Test Report: Docker_Linux_crio 21847

fa4d670f7aa2bf54fac775fb3c292483f6687320:2025-11-21:42430

Test failures (38/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.24
35 TestAddons/parallel/Registry 12.82
36 TestAddons/parallel/RegistryCreds 0.42
37 TestAddons/parallel/Ingress 144.47
38 TestAddons/parallel/InspektorGadget 5.25
39 TestAddons/parallel/MetricsServer 5.32
41 TestAddons/parallel/CSI 34.84
42 TestAddons/parallel/Headlamp 2.36
43 TestAddons/parallel/CloudSpanner 5.25
44 TestAddons/parallel/LocalPath 10.05
45 TestAddons/parallel/NvidiaDevicePlugin 5.25
46 TestAddons/parallel/Yakd 5.25
47 TestAddons/parallel/AmdGpuDevicePlugin 6.24
97 TestFunctional/parallel/ServiceCmdConnect 602.66
116 TestFunctional/parallel/ImageCommands/ImageListShort 2.3
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.89
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.07
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.25
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.28
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.18
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.32
137 TestFunctional/parallel/ServiceCmd/DeployApp 600.53
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
153 TestFunctional/parallel/ServiceCmd/Format 0.52
154 TestFunctional/parallel/ServiceCmd/URL 0.51
191 TestJSONOutput/pause/Command 2.18
197 TestJSONOutput/unpause/Command 1.61
295 TestPause/serial/Pause 4.85
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.06
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.15
309 TestStartStop/group/old-k8s-version/serial/Pause 5.12
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.11
319 TestStartStop/group/no-preload/serial/Pause 6.31
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.09
338 TestStartStop/group/newest-cni/serial/Pause 5.53
340 TestStartStop/group/embed-certs/serial/Pause 6.44
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.39
377 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.64

TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 addons disable volcano --alsologtostderr -v=1: exit status 11 (239.971566ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1121 13:57:52.291074   23919 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:57:52.291214   23919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:57:52.291222   23919 out.go:374] Setting ErrFile to fd 2...
	I1121 13:57:52.291226   23919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:57:52.291407   23919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:57:52.291650   23919 mustload.go:66] Loading cluster: addons-243127
	I1121 13:57:52.291947   23919 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:57:52.291961   23919 addons.go:622] checking whether the cluster is paused
	I1121 13:57:52.292038   23919 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:57:52.292049   23919 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:57:52.292381   23919 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:57:52.310098   23919 ssh_runner.go:195] Run: systemctl --version
	I1121 13:57:52.310148   23919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:57:52.326476   23919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:57:52.420056   23919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:57:52.420133   23919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:57:52.449520   23919 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 13:57:52.449544   23919 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 13:57:52.449548   23919 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 13:57:52.449551   23919 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 13:57:52.449554   23919 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 13:57:52.449579   23919 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 13:57:52.449584   23919 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 13:57:52.449588   23919 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 13:57:52.449591   23919 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 13:57:52.449598   23919 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 13:57:52.449602   23919 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 13:57:52.449605   23919 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 13:57:52.449609   23919 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 13:57:52.449613   23919 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 13:57:52.449619   23919 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 13:57:52.449626   23919 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 13:57:52.449630   23919 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 13:57:52.449635   23919 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 13:57:52.449639   23919 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 13:57:52.449643   23919 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 13:57:52.449653   23919 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 13:57:52.449662   23919 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 13:57:52.449667   23919 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 13:57:52.449673   23919 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 13:57:52.449678   23919 cri.go:89] found id: ""
	I1121 13:57:52.449723   23919 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:57:52.463707   23919 out.go:203] 
	W1121 13:57:52.464818   23919 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:57:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:57:52.464838   23919 out.go:285] * 
	W1121 13:57:52.468021   23919 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:57:52.469132   23919 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-243127 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)
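
Editor's note: this failure signature repeats across the addon tests below. The addon operation itself succeeds or is skipped, but minikube's pre-disable paused-state check exits with status 11 (MK_ADDON_DISABLE_PAUSED) because "sudo runc list -f json" cannot open runc's default state directory /run/runc on this CRI-O node. A minimal sketch to replay the two steps of that check by hand, assuming the addons-243127 profile from this report is still up; the crio config probe at the end is an extra diagnostic, not something the test runs:

#!/usr/bin/env bash
# Replay minikube's "is the cluster paused?" check from the logs above.
set -euo pipefail

PROFILE=addons-243127   # profile name taken from this report

# Step 1 (succeeds in the logs): list kube-system containers via crictl.
minikube -p "$PROFILE" ssh -- \
  "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

# Step 2 (fails in the logs): runc reads its default state dir /run/runc,
# which does not exist here, so the disable aborts before doing any work.
minikube -p "$PROFILE" ssh -- "sudo runc list -f json" \
  || echo "runc list failed, matching the report"

# Assumption: if CRI-O drives its runtime under a non-default root, the
# actual path has to come from the node's crio config, e.g.:
minikube -p "$PROFILE" ssh -- "sudo crio config 2>/dev/null | grep -n -A3 'crio.runtime'" || true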

TestAddons/parallel/Registry (12.82s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.793068ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-2dn55" [01d1fb94-7e93-4c68-b4a5-4a7aec2eeffb] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002041638s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-9k9gw" [7e745192-f6fe-4677-b1f9-90e0ca68e72e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003260385s
addons_test.go:392: (dbg) Run:  kubectl --context addons-243127 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-243127 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-243127 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.331647s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 ip
2025/11/21 13:58:13 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 addons disable registry --alsologtostderr -v=1: exit status 11 (262.018749ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1121 13:58:13.842741   26355 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:58:13.843014   26355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:13.843023   26355 out.go:374] Setting ErrFile to fd 2...
	I1121 13:58:13.843027   26355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:13.843186   26355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:58:13.843390   26355 mustload.go:66] Loading cluster: addons-243127
	I1121 13:58:13.843723   26355 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:13.843737   26355 addons.go:622] checking whether the cluster is paused
	I1121 13:58:13.843815   26355 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:13.843826   26355 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:58:13.844157   26355 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:58:13.863429   26355 ssh_runner.go:195] Run: systemctl --version
	I1121 13:58:13.863477   26355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:58:13.883782   26355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:58:13.986380   26355 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:58:13.986479   26355 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:58:14.019193   26355 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 13:58:14.019224   26355 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 13:58:14.019230   26355 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 13:58:14.019234   26355 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 13:58:14.019238   26355 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 13:58:14.019243   26355 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 13:58:14.019246   26355 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 13:58:14.019250   26355 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 13:58:14.019255   26355 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 13:58:14.019278   26355 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 13:58:14.019285   26355 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 13:58:14.019290   26355 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 13:58:14.019298   26355 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 13:58:14.019303   26355 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 13:58:14.019311   26355 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 13:58:14.019331   26355 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 13:58:14.019340   26355 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 13:58:14.019346   26355 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 13:58:14.019350   26355 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 13:58:14.019354   26355 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 13:58:14.019357   26355 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 13:58:14.019361   26355 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 13:58:14.019365   26355 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 13:58:14.019372   26355 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 13:58:14.019379   26355 cri.go:89] found id: ""
	I1121 13:58:14.019430   26355 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:58:14.035419   26355 out.go:203] 
	W1121 13:58:14.036578   26355 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:58:14.036605   26355 out.go:285] * 
	W1121 13:58:14.041526   26355 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:58:14.042977   26355 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-243127 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (12.82s)
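
Editor's note: the registry addon itself passed every functional check above (both pods healthy, the in-cluster wget probe, and the host-side GET against 192.168.49.2:5000); only the trailing "addons disable registry" call hit the same paused-check failure as Volcano. A sketch for re-running the reachability probes by hand, using the context, image, and service name taken from this log:

#!/usr/bin/env bash
# Re-run the registry reachability probes from this test manually.
set -euo pipefail

# In-cluster probe: a one-shot busybox pod spiders the registry Service DNS name.
kubectl --context addons-243127 run registry-test --rm -it --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

# Host-side probe against the node IP on port 5000, as the test does.
curl -sS "http://$(minikube -p addons-243127 ip):5000/" && echo "registry reachable"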

TestAddons/parallel/RegistryCreds (0.42s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.870059ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-243127
addons_test.go:332: (dbg) Run:  kubectl --context addons-243127 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (270.240591ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1121 13:58:12.008895   25730 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:58:12.009174   25730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:12.009185   25730 out.go:374] Setting ErrFile to fd 2...
	I1121 13:58:12.009191   25730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:12.009411   25730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:58:12.009690   25730 mustload.go:66] Loading cluster: addons-243127
	I1121 13:58:12.010042   25730 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:12.010060   25730 addons.go:622] checking whether the cluster is paused
	I1121 13:58:12.010159   25730 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:12.010174   25730 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:58:12.010548   25730 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:58:12.029391   25730 ssh_runner.go:195] Run: systemctl --version
	I1121 13:58:12.029451   25730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:58:12.049893   25730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:58:12.152213   25730 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:58:12.152298   25730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:58:12.193878   25730 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 13:58:12.193903   25730 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 13:58:12.193908   25730 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 13:58:12.193913   25730 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 13:58:12.193917   25730 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 13:58:12.193925   25730 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 13:58:12.193929   25730 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 13:58:12.193934   25730 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 13:58:12.193939   25730 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 13:58:12.193950   25730 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 13:58:12.193958   25730 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 13:58:12.193963   25730 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 13:58:12.193971   25730 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 13:58:12.193976   25730 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 13:58:12.193984   25730 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 13:58:12.194001   25730 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 13:58:12.194010   25730 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 13:58:12.194016   25730 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 13:58:12.194019   25730 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 13:58:12.194023   25730 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 13:58:12.194027   25730 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 13:58:12.194032   25730 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 13:58:12.194038   25730 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 13:58:12.194043   25730 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 13:58:12.194050   25730 cri.go:89] found id: ""
	I1121 13:58:12.194123   25730 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:58:12.209850   25730 out.go:203] 
	W1121 13:58:12.211230   25730 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:58:12.211263   25730 out.go:285] * 
	W1121 13:58:12.214234   25730 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:58:12.215384   25730 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-243127 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.42s)

TestAddons/parallel/Ingress (144.47s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-243127 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-243127 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-243127 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [a308fa7e-217b-4118-9a31-24408389206f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [a308fa7e-217b-4118-9a31-24408389206f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003332057s
I1121 13:58:20.178934   14542 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.149973137s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-243127 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
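
Editor's note: exit status 28 from the in-node curl is curl's operation-timed-out code, so the request got no HTTP response at all: the ingress-nginx controller never answered on 127.0.0.1:80 inside the node, as opposed to returning a 404 or 502. A short triage sketch for this profile; the controller deployment name is the usual minikube one and is an assumption here, and the log-tail step is extra diagnosis the test does not perform:

#!/usr/bin/env bash
# Triage the curl timeout (exit 28) from TestAddons/parallel/Ingress.
set -euo pipefail

CTX=addons-243127

# Controller up? Ingress admitted with an address and backends?
kubectl --context "$CTX" -n ingress-nginx get pods -o wide
kubectl --context "$CTX" get ingress -o wide

# Repeat the exact request verbosely with a bounded timeout, so a hang is
# distinguishable from an HTTP error.
minikube -p "$CTX" ssh -- \
  "curl -v --max-time 15 -H 'Host: nginx.example.com' http://127.0.0.1/"

# Controller logs around the failure window (deployment name is an assumption).
kubectl --context "$CTX" -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50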
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-243127
helpers_test.go:243: (dbg) docker inspect addons-243127:

-- stdout --
	[
	    {
	        "Id": "1ec68ec4d468a95ec5b7d062b698336448aaad0a936ca78985ffe849253396a6",
	        "Created": "2025-11-21T13:56:12.147939743Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16559,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T13:56:12.179022991Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/1ec68ec4d468a95ec5b7d062b698336448aaad0a936ca78985ffe849253396a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ec68ec4d468a95ec5b7d062b698336448aaad0a936ca78985ffe849253396a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ec68ec4d468a95ec5b7d062b698336448aaad0a936ca78985ffe849253396a6/hosts",
	        "LogPath": "/var/lib/docker/containers/1ec68ec4d468a95ec5b7d062b698336448aaad0a936ca78985ffe849253396a6/1ec68ec4d468a95ec5b7d062b698336448aaad0a936ca78985ffe849253396a6-json.log",
	        "Name": "/addons-243127",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-243127:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-243127",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ec68ec4d468a95ec5b7d062b698336448aaad0a936ca78985ffe849253396a6",
	                "LowerDir": "/var/lib/docker/overlay2/6d71564661e888343ffb73fedd00b3c321cf896f2e5f2566c0291ee2d3de8cea-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6d71564661e888343ffb73fedd00b3c321cf896f2e5f2566c0291ee2d3de8cea/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6d71564661e888343ffb73fedd00b3c321cf896f2e5f2566c0291ee2d3de8cea/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6d71564661e888343ffb73fedd00b3c321cf896f2e5f2566c0291ee2d3de8cea/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-243127",
	                "Source": "/var/lib/docker/volumes/addons-243127/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-243127",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-243127",
	                "name.minikube.sigs.k8s.io": "addons-243127",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8b8bdd0822504c4079d5cb8cc517c2980d440ada4412b4ecf716ef587aef2f4b",
	            "SandboxKey": "/var/run/docker/netns/8b8bdd082250",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-243127": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1c1ab31aac030cbbaad7777676044a32d96f21040721f0aee934d252b39cf533",
	                    "EndpointID": "631164ff79e4da350f56ada42b6efad7975f084b6ec5d08b273c5bd9649c0d0c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "22:43:38:04:42:7c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-243127",
	                        "1ec68ec4d468"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-243127 -n addons-243127
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-243127 logs -n 25: (1.076619199s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-248688 --alsologtostderr --binary-mirror http://127.0.0.1:45793 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-248688 │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │                     │
	│ delete  │ -p binary-mirror-248688                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-248688 │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │ 21 Nov 25 13:55 UTC │
	│ addons  │ disable dashboard -p addons-243127                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │                     │
	│ addons  │ enable dashboard -p addons-243127                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │                     │
	│ start   │ -p addons-243127 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │ 21 Nov 25 13:57 UTC │
	│ addons  │ addons-243127 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:57 UTC │                     │
	│ addons  │ addons-243127 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ addons  │ enable headlamp -p addons-243127 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ addons  │ addons-243127 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ addons  │ addons-243127 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ addons  │ addons-243127 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ addons  │ addons-243127 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ addons  │ addons-243127 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-243127                                                                                                                                                                                                                                                                                                                                                                                           │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │ 21 Nov 25 13:58 UTC │
	│ addons  │ addons-243127 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ ip      │ addons-243127 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │ 21 Nov 25 13:58 UTC │
	│ addons  │ addons-243127 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ addons  │ addons-243127 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ ssh     │ addons-243127 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ ssh     │ addons-243127 ssh cat /opt/local-path-provisioner/pvc-c29fcf8b-bba0-4719-9424-7448a031a85f_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │ 21 Nov 25 13:58 UTC │
	│ addons  │ addons-243127 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ addons  │ addons-243127 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ addons  │ addons-243127 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ addons  │ addons-243127 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ ip      │ addons-243127 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-243127        │ jenkins │ v1.37.0 │ 21 Nov 25 14:00 UTC │ 21 Nov 25 14:00 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 13:55:49
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
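
For reference, the header documented above is the standard glog format: a severity letter (I, W, E, or F), an mmdd date, a microsecond timestamp, the thread id, and the emitting source file and line. A minimal Go sketch of parsing such a header follows; the regexp and field names are illustrative assumptions, not minikube code.

	package main

	import (
		"fmt"
		"regexp"
	)

	// glogLine matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
	var glogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+):(\d+)\] (.*)$`)

	func main() {
		line := `I1121 13:55:49.057153   15892 out.go:360] Setting OutFile to fd 1 ...`
		if m := glogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s line=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}
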
	I1121 13:55:49.057153   15892 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:55:49.057381   15892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:55:49.057397   15892 out.go:374] Setting ErrFile to fd 2...
	I1121 13:55:49.057403   15892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:55:49.057632   15892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:55:49.058141   15892 out.go:368] Setting JSON to false
	I1121 13:55:49.058949   15892 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2298,"bootTime":1763731051,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 13:55:49.059017   15892 start.go:143] virtualization: kvm guest
	I1121 13:55:49.060484   15892 out.go:179] * [addons-243127] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 13:55:49.061589   15892 notify.go:221] Checking for updates...
	I1121 13:55:49.061609   15892 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 13:55:49.062745   15892 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 13:55:49.063978   15892 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 13:55:49.065043   15892 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 13:55:49.066036   15892 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 13:55:49.067032   15892 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 13:55:49.068706   15892 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 13:55:49.091488   15892 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 13:55:49.091591   15892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:55:49.147037   15892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-21 13:55:49.138675185 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 13:55:49.147139   15892 docker.go:319] overlay module found
	I1121 13:55:49.148616   15892 out.go:179] * Using the docker driver based on user configuration
	I1121 13:55:49.149629   15892 start.go:309] selected driver: docker
	I1121 13:55:49.149640   15892 start.go:930] validating driver "docker" against <nil>
	I1121 13:55:49.149649   15892 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 13:55:49.150161   15892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:55:49.200423   15892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-21 13:55:49.191843813 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 13:55:49.200588   15892 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 13:55:49.200811   15892 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 13:55:49.202182   15892 out.go:179] * Using Docker driver with root privileges
	I1121 13:55:49.203206   15892 cni.go:84] Creating CNI manager for ""
	I1121 13:55:49.203256   15892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 13:55:49.203265   15892 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 13:55:49.203315   15892 start.go:353] cluster config:
	{Name:addons-243127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-243127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 13:55:49.204405   15892 out.go:179] * Starting "addons-243127" primary control-plane node in "addons-243127" cluster
	I1121 13:55:49.205367   15892 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 13:55:49.206479   15892 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 13:55:49.207514   15892 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 13:55:49.207544   15892 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 13:55:49.207554   15892 cache.go:65] Caching tarball of preloaded images
	I1121 13:55:49.207586   15892 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 13:55:49.207655   15892 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 13:55:49.207669   15892 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 13:55:49.207996   15892 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/config.json ...
	I1121 13:55:49.208023   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/config.json: {Name:mkce95b84801e2e0b8601121d7dfde29ce254004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:55:49.222636   15892 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1121 13:55:49.222726   15892 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1121 13:55:49.222745   15892 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1121 13:55:49.222751   15892 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1121 13:55:49.222764   15892 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1121 13:55:49.222774   15892 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from local cache
	I1121 13:56:01.080194   15892 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from cached tarball
	I1121 13:56:01.080232   15892 cache.go:243] Successfully downloaded all kic artifacts
	I1121 13:56:01.080284   15892 start.go:360] acquireMachinesLock for addons-243127: {Name:mkea124a4b7a8ba801648345708233fc7b1fdc41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 13:56:01.080387   15892 start.go:364] duration metric: took 80.734µs to acquireMachinesLock for "addons-243127"
	I1121 13:56:01.080416   15892 start.go:93] Provisioning new machine with config: &{Name:addons-243127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-243127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 13:56:01.080509   15892 start.go:125] createHost starting for "" (driver="docker")
	I1121 13:56:01.082193   15892 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1121 13:56:01.082411   15892 start.go:159] libmachine.API.Create for "addons-243127" (driver="docker")
	I1121 13:56:01.082447   15892 client.go:173] LocalClient.Create starting
	I1121 13:56:01.082545   15892 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem
	I1121 13:56:01.258168   15892 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem
	I1121 13:56:01.344661   15892 cli_runner.go:164] Run: docker network inspect addons-243127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 13:56:01.360952   15892 cli_runner.go:211] docker network inspect addons-243127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 13:56:01.361010   15892 network_create.go:284] running [docker network inspect addons-243127] to gather additional debugging logs...
	I1121 13:56:01.361031   15892 cli_runner.go:164] Run: docker network inspect addons-243127
	W1121 13:56:01.375481   15892 cli_runner.go:211] docker network inspect addons-243127 returned with exit code 1
	I1121 13:56:01.375504   15892 network_create.go:287] error running [docker network inspect addons-243127]: docker network inspect addons-243127: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-243127 not found
	I1121 13:56:01.375518   15892 network_create.go:289] output of [docker network inspect addons-243127]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-243127 not found
	
	** /stderr **
	I1121 13:56:01.375650   15892 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 13:56:01.390468   15892 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dd9820}
	I1121 13:56:01.390503   15892 network_create.go:124] attempt to create docker network addons-243127 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1121 13:56:01.390547   15892 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-243127 addons-243127
	I1121 13:56:01.432170   15892 network_create.go:108] docker network addons-243127 192.168.49.0/24 created
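
The subnet 192.168.49.0/24 chosen above is the first free private /24 that minikube found for the cluster network. A rough Go sketch of that kind of selection follows, assuming a hypothetical candidate list and a simple CIDR-overlap check; this is illustrative, not minikube's actual algorithm.

	package main

	import (
		"fmt"
		"net"
	)

	// firstFree returns the first candidate subnet that does not
	// overlap any already-taken network.
	func firstFree(candidates []string, taken []*net.IPNet) *net.IPNet {
		for _, c := range candidates {
			_, subnet, err := net.ParseCIDR(c)
			if err != nil {
				continue
			}
			free := true
			for _, t := range taken {
				// Two CIDRs overlap iff either contains the
				// other's network address.
				if t.Contains(subnet.IP) || subnet.Contains(t.IP) {
					free = false
					break
				}
			}
			if free {
				return subnet
			}
		}
		return nil
	}

	func main() {
		_, used, _ := net.ParseCIDR("172.17.0.0/16") // e.g. the default docker0 bridge
		got := firstFree([]string{"192.168.49.0/24", "192.168.58.0/24"}, []*net.IPNet{used})
		fmt.Println(got) // 192.168.49.0/24
	}
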
	I1121 13:56:01.432206   15892 kic.go:121] calculated static IP "192.168.49.2" for the "addons-243127" container
	I1121 13:56:01.432268   15892 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 13:56:01.446919   15892 cli_runner.go:164] Run: docker volume create addons-243127 --label name.minikube.sigs.k8s.io=addons-243127 --label created_by.minikube.sigs.k8s.io=true
	I1121 13:56:01.462258   15892 oci.go:103] Successfully created a docker volume addons-243127
	I1121 13:56:01.462323   15892 cli_runner.go:164] Run: docker run --rm --name addons-243127-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-243127 --entrypoint /usr/bin/test -v addons-243127:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 13:56:07.853063   15892 cli_runner.go:217] Completed: docker run --rm --name addons-243127-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-243127 --entrypoint /usr/bin/test -v addons-243127:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib: (6.390702863s)
	I1121 13:56:07.853088   15892 oci.go:107] Successfully prepared a docker volume addons-243127
	I1121 13:56:07.853137   15892 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 13:56:07.853149   15892 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 13:56:07.853203   15892 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-243127:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 13:56:12.081204   15892 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-243127:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.227968178s)
	I1121 13:56:12.081230   15892 kic.go:203] duration metric: took 4.22807771s to extract preloaded images to volume ...
	W1121 13:56:12.081301   15892 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1121 13:56:12.081348   15892 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1121 13:56:12.081384   15892 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 13:56:12.133464   15892 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-243127 --name addons-243127 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-243127 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-243127 --network addons-243127 --ip 192.168.49.2 --volume addons-243127:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 13:56:12.421642   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Running}}
	I1121 13:56:12.439685   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:12.458665   15892 cli_runner.go:164] Run: docker exec addons-243127 stat /var/lib/dpkg/alternatives/iptables
	I1121 13:56:12.505446   15892 oci.go:144] the created container "addons-243127" has a running status.
	I1121 13:56:12.505481   15892 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa...
	I1121 13:56:12.631777   15892 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 13:56:12.659186   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:12.680107   15892 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 13:56:12.680124   15892 kic_runner.go:114] Args: [docker exec --privileged addons-243127 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 13:56:12.724665   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:12.747889   15892 machine.go:94] provisionDockerMachine start ...
	I1121 13:56:12.747996   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:12.767339   15892 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:12.767642   15892 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 13:56:12.767666   15892 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 13:56:12.899016   15892 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-243127
	
	I1121 13:56:12.899040   15892 ubuntu.go:182] provisioning hostname "addons-243127"
	I1121 13:56:12.899093   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:12.916800   15892 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:12.917000   15892 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 13:56:12.917017   15892 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-243127 && echo "addons-243127" | sudo tee /etc/hostname
	I1121 13:56:13.053816   15892 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-243127
	
	I1121 13:56:13.053905   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:13.072412   15892 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:13.072637   15892 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 13:56:13.072655   15892 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-243127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-243127/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-243127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 13:56:13.197111   15892 main.go:143] libmachine: SSH cmd err, output: <nil>: 
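
The shell script above touches /etc/hosts only when no line already maps the hostname, so repeated provisioning is idempotent. The same check in Go, as a sketch covering just the append branch (the sed-replace branch for an existing 127.0.1.1 entry is elided; the paths and helper name are illustrative):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostMapping appends "127.0.1.1 <hostname>" to path unless
	// some line already lists the hostname.
	func ensureHostMapping(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, line := range strings.Split(string(data), "\n") {
			fields := strings.Fields(line)
			if len(fields) < 2 {
				continue
			}
			for _, name := range fields[1:] { // fields[0] is the IP
				if name == hostname {
					return nil // already mapped, nothing to do
				}
			}
		}
		entry := []byte("127.0.1.1 " + hostname + "\n")
		return os.WriteFile(path, append(data, entry...), 0o644)
	}

	func main() {
		fmt.Println(ensureHostMapping("/etc/hosts", "addons-243127"))
	}
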
	I1121 13:56:13.197134   15892 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11045/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11045/.minikube}
	I1121 13:56:13.197151   15892 ubuntu.go:190] setting up certificates
	I1121 13:56:13.197162   15892 provision.go:84] configureAuth start
	I1121 13:56:13.197219   15892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-243127
	I1121 13:56:13.213450   15892 provision.go:143] copyHostCerts
	I1121 13:56:13.213522   15892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem (1078 bytes)
	I1121 13:56:13.213653   15892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem (1123 bytes)
	I1121 13:56:13.213720   15892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem (1679 bytes)
	I1121 13:56:13.213777   15892 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem org=jenkins.addons-243127 san=[127.0.0.1 192.168.49.2 addons-243127 localhost minikube]
	I1121 13:56:13.336773   15892 provision.go:177] copyRemoteCerts
	I1121 13:56:13.336818   15892 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 13:56:13.336848   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:13.353740   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:13.445589   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 13:56:13.462743   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1121 13:56:13.478249   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 13:56:13.493470   15892 provision.go:87] duration metric: took 296.286511ms to configureAuth
	I1121 13:56:13.493488   15892 ubuntu.go:206] setting minikube options for container-runtime
	I1121 13:56:13.493679   15892 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:56:13.493811   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:13.509651   15892 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:13.509857   15892 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 13:56:13.509881   15892 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 13:56:13.760189   15892 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 13:56:13.760214   15892 machine.go:97] duration metric: took 1.012300095s to provisionDockerMachine
	I1121 13:56:13.760224   15892 client.go:176] duration metric: took 12.6777646s to LocalClient.Create
	I1121 13:56:13.760242   15892 start.go:167] duration metric: took 12.677831425s to libmachine.API.Create "addons-243127"
	I1121 13:56:13.760252   15892 start.go:293] postStartSetup for "addons-243127" (driver="docker")
	I1121 13:56:13.760263   15892 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 13:56:13.760314   15892 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 13:56:13.760361   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:13.776420   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:13.868934   15892 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 13:56:13.872119   15892 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 13:56:13.872143   15892 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 13:56:13.872152   15892 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/addons for local assets ...
	I1121 13:56:13.872206   15892 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/files for local assets ...
	I1121 13:56:13.872233   15892 start.go:296] duration metric: took 111.975104ms for postStartSetup
	I1121 13:56:13.872494   15892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-243127
	I1121 13:56:13.889054   15892 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/config.json ...
	I1121 13:56:13.889320   15892 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 13:56:13.889367   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:13.904631   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:13.993646   15892 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 13:56:13.998006   15892 start.go:128] duration metric: took 12.917483058s to createHost
	I1121 13:56:13.998022   15892 start.go:83] releasing machines lock for "addons-243127", held for 12.91762329s
	I1121 13:56:13.998082   15892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-243127
	I1121 13:56:14.013323   15892 ssh_runner.go:195] Run: cat /version.json
	I1121 13:56:14.013359   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:14.013413   15892 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 13:56:14.013467   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:14.029416   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:14.031616   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:14.178855   15892 ssh_runner.go:195] Run: systemctl --version
	I1121 13:56:14.184530   15892 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 13:56:14.215444   15892 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 13:56:14.219879   15892 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 13:56:14.219930   15892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 13:56:14.243256   15892 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 13:56:14.243276   15892 start.go:496] detecting cgroup driver to use...
	I1121 13:56:14.243306   15892 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 13:56:14.243345   15892 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 13:56:14.257146   15892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 13:56:14.267623   15892 docker.go:218] disabling cri-docker service (if available) ...
	I1121 13:56:14.267672   15892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 13:56:14.281734   15892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 13:56:14.297135   15892 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 13:56:14.375212   15892 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 13:56:14.456799   15892 docker.go:234] disabling docker service ...
	I1121 13:56:14.456861   15892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 13:56:14.473705   15892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 13:56:14.484331   15892 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 13:56:14.561730   15892 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 13:56:14.637031   15892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 13:56:14.647498   15892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 13:56:14.659765   15892 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 13:56:14.659818   15892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:14.668839   15892 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 13:56:14.668881   15892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:14.676341   15892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:14.683945   15892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:14.691259   15892 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 13:56:14.698092   15892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:14.705424   15892 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:14.716901   15892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
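
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pointing at the new pause image, using the systemd cgroup manager with a pod-scoped conmon cgroup, and allowing unprivileged low ports. A hedged sketch of how the drop-in might then read (the values are taken from the log; the section headers are an assumption about the file's layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
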
	I1121 13:56:14.724286   15892 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 13:56:14.730656   15892 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1121 13:56:14.730698   15892 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1121 13:56:14.741242   15892 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 13:56:14.747538   15892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 13:56:14.820143   15892 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 13:56:14.942432   15892 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 13:56:14.942488   15892 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 13:56:14.945985   15892 start.go:564] Will wait 60s for crictl version
	I1121 13:56:14.946030   15892 ssh_runner.go:195] Run: which crictl
	I1121 13:56:14.949177   15892 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 13:56:14.972995   15892 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 13:56:14.973077   15892 ssh_runner.go:195] Run: crio --version
	I1121 13:56:14.997307   15892 ssh_runner.go:195] Run: crio --version
	I1121 13:56:15.023918   15892 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 13:56:15.024984   15892 cli_runner.go:164] Run: docker network inspect addons-243127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 13:56:15.040870   15892 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1121 13:56:15.044477   15892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 13:56:15.053821   15892 kubeadm.go:884] updating cluster {Name:addons-243127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-243127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 13:56:15.053923   15892 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 13:56:15.053966   15892 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 13:56:15.082096   15892 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 13:56:15.082112   15892 crio.go:433] Images already preloaded, skipping extraction
	I1121 13:56:15.082152   15892 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 13:56:15.103636   15892 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 13:56:15.103653   15892 cache_images.go:86] Images are preloaded, skipping loading
	I1121 13:56:15.103659   15892 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1121 13:56:15.103740   15892 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-243127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-243127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
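
Note the empty ExecStart= line before the real one in the kubelet drop-in above: that is the standard systemd idiom for overriding a unit, since an empty assignment clears any ExecStart inherited from the base kubelet.service before the override takes effect. In generic form:

	[Service]
	ExecStart=
	ExecStart=/path/to/new/command --with-flags
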
	I1121 13:56:15.103792   15892 ssh_runner.go:195] Run: crio config
	I1121 13:56:15.144422   15892 cni.go:84] Creating CNI manager for ""
	I1121 13:56:15.144439   15892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 13:56:15.144455   15892 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 13:56:15.144476   15892 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-243127 NodeName:addons-243127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 13:56:15.144615   15892 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-243127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 13:56:15.144664   15892 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 13:56:15.151720   15892 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 13:56:15.151763   15892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 13:56:15.159128   15892 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1121 13:56:15.170317   15892 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 13:56:15.183701   15892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
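
The kubeadm.yaml.new written above is the multi-document kubeadm config dumped earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one YAML stream. A sketch of walking such a stream with gopkg.in/yaml.v3 to read each document's kind (the path is taken from the log; everything else is illustrative):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// A yaml.Decoder yields one document per Decode call and
		// io.EOF after the last "---"-separated document.
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Println(doc.APIVersion, doc.Kind)
		}
	}
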
	I1121 13:56:15.194636   15892 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1121 13:56:15.197680   15892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 13:56:15.206303   15892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 13:56:15.278921   15892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 13:56:15.300342   15892 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127 for IP: 192.168.49.2
	I1121 13:56:15.300356   15892 certs.go:195] generating shared ca certs ...
	I1121 13:56:15.300369   15892 certs.go:227] acquiring lock for ca certs: {Name:mkde3a7d6f17b238f06eab3a140993599f1b4367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:15.300477   15892 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key
	I1121 13:56:15.471299   15892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt ...
	I1121 13:56:15.471325   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt: {Name:mk61b49fa89e084ba2749969322820f2bb2c6d21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:15.471494   15892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key ...
	I1121 13:56:15.471510   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key: {Name:mke7fb5f0ae9e7ba8c7140d87cbc59455899f32a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:15.471630   15892 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key
	I1121 13:56:15.636314   15892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt ...
	I1121 13:56:15.636339   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt: {Name:mk0a574429af51245df02d07a08a97d85f76ece6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:15.636483   15892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key ...
	I1121 13:56:15.636494   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key: {Name:mk2a9cdd54e0b1b68111efd8b987f1d2a79ad5cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:15.636580   15892 certs.go:257] generating profile certs ...
	I1121 13:56:15.636636   15892 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.key
	I1121 13:56:15.636650   15892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt with IP's: []
	I1121 13:56:15.781577   15892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt ...
	I1121 13:56:15.781599   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: {Name:mk1d9c1991e5dfc8fd2703c373557eebcfd0a745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:15.781734   15892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.key ...
	I1121 13:56:15.781744   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.key: {Name:mk5f74bf97b7db47fe3a4f6a5e196a3f3088b2ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:15.781808   15892 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.key.285be2fb
	I1121 13:56:15.781825   15892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.crt.285be2fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1121 13:56:16.010672   15892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.crt.285be2fb ...
	I1121 13:56:16.010694   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.crt.285be2fb: {Name:mk90ccdf18068edc086dc7f222dd06f21dbf5c8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:16.010840   15892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.key.285be2fb ...
	I1121 13:56:16.010853   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.key.285be2fb: {Name:mkb3b2eb936646a224b260dbeb3c4c9ffc2b4d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:16.010919   15892 certs.go:382] copying /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.crt.285be2fb -> /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.crt
	I1121 13:56:16.010987   15892 certs.go:386] copying /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.key.285be2fb -> /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.key
	I1121 13:56:16.011034   15892 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.key
	I1121 13:56:16.011050   15892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.crt with IP's: []
	I1121 13:56:16.271488   15892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.crt ...
	I1121 13:56:16.271519   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.crt: {Name:mk55ebbc2f27359aac3b7bea8e90ef2f44a5f8c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:16.271675   15892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.key ...
	I1121 13:56:16.271686   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.key: {Name:mk052d62d16404c6504555f85be9a6b81ddecae7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:16.271855   15892 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 13:56:16.271887   15892 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem (1078 bytes)
	I1121 13:56:16.271907   15892 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem (1123 bytes)
	I1121 13:56:16.271930   15892 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem (1679 bytes)
	I1121 13:56:16.272486   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 13:56:16.289306   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 13:56:16.305008   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 13:56:16.320330   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 13:56:16.335307   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 13:56:16.350489   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 13:56:16.366092   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 13:56:16.381039   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 13:56:16.396223   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 13:56:16.412959   15892 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 13:56:16.424035   15892 ssh_runner.go:195] Run: openssl version
	I1121 13:56:16.429259   15892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 13:56:16.438455   15892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 13:56:16.441659   15892 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 13:56:16.441704   15892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 13:56:16.474470   15892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
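The openssl/ln steps above follow the OpenSSL c_rehash convention: a CA is linked into /etc/ssl/certs under its subject hash with a ".0" suffix so TLS clients can locate it by hash. A shell sketch of the same two steps:

    # link a CA into the trust store by subject hash (as done above)
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # yields b5213941.0 in this run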
	I1121 13:56:16.481922   15892 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 13:56:16.484952   15892 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 13:56:16.484993   15892 kubeadm.go:401] StartCluster: {Name:addons-243127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-243127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 13:56:16.485050   15892 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:56:16.485082   15892 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:56:16.509135   15892 cri.go:89] found id: ""
	I1121 13:56:16.509177   15892 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 13:56:16.515988   15892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 13:56:16.522710   15892 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 13:56:16.522749   15892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 13:56:16.529292   15892 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 13:56:16.529309   15892 kubeadm.go:158] found existing configuration files:
	
	I1121 13:56:16.529348   15892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 13:56:16.535778   15892 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 13:56:16.535812   15892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 13:56:16.542224   15892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 13:56:16.548661   15892 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 13:56:16.548705   15892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 13:56:16.554972   15892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 13:56:16.561551   15892 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 13:56:16.561609   15892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 13:56:16.567780   15892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 13:56:16.574277   15892 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 13:56:16.574318   15892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 13:56:16.580489   15892 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 13:56:16.612951   15892 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 13:56:16.613048   15892 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 13:56:16.632087   15892 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 13:56:16.632179   15892 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 13:56:16.632253   15892 kubeadm.go:319] OS: Linux
	I1121 13:56:16.632298   15892 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 13:56:16.632366   15892 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 13:56:16.632428   15892 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 13:56:16.632478   15892 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 13:56:16.632567   15892 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 13:56:16.632643   15892 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 13:56:16.632733   15892 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 13:56:16.632780   15892 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 13:56:16.681905   15892 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 13:56:16.682066   15892 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 13:56:16.682233   15892 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 13:56:16.688772   15892 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 13:56:16.691049   15892 out.go:252]   - Generating certificates and keys ...
	I1121 13:56:16.691143   15892 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 13:56:16.691233   15892 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 13:56:16.801776   15892 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 13:56:16.973155   15892 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 13:56:17.276734   15892 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 13:56:17.397878   15892 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 13:56:17.706843   15892 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 13:56:17.707001   15892 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-243127 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 13:56:18.151055   15892 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 13:56:18.151216   15892 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-243127 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 13:56:18.290947   15892 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 13:56:18.430148   15892 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 13:56:19.049420   15892 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 13:56:19.049512   15892 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 13:56:19.338837   15892 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 13:56:19.437878   15892 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 13:56:19.651247   15892 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 13:56:19.802783   15892 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 13:56:20.356874   15892 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 13:56:20.357347   15892 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 13:56:20.360907   15892 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 13:56:20.362394   15892 out.go:252]   - Booting up control plane ...
	I1121 13:56:20.362474   15892 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 13:56:20.362543   15892 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 13:56:20.362948   15892 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 13:56:20.375185   15892 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 13:56:20.375294   15892 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 13:56:20.381073   15892 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 13:56:20.381362   15892 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 13:56:20.381441   15892 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 13:56:20.472352   15892 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 13:56:20.472493   15892 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 13:56:20.973332   15892 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.053268ms
	I1121 13:56:20.976151   15892 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 13:56:20.976271   15892 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1121 13:56:20.976364   15892 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 13:56:20.976440   15892 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 13:56:22.097814   15892 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.121478607s
	I1121 13:56:22.759437   15892 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.783162469s
	I1121 13:56:24.477965   15892 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501711789s
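The three control-plane-check probes above hit each component's health endpoint directly; equivalent manual checks from inside the node would be (sketch, endpoints copied from the log lines above):

    curl -k https://192.168.49.2:8443/livez     # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez       # kube-scheduler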
	I1121 13:56:24.487731   15892 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 13:56:24.496049   15892 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 13:56:24.503673   15892 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 13:56:24.503960   15892 kubeadm.go:319] [mark-control-plane] Marking the node addons-243127 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 13:56:24.511144   15892 kubeadm.go:319] [bootstrap-token] Using token: zw15bu.qcstwz6fx1p3zpbt
	I1121 13:56:24.512387   15892 out.go:252]   - Configuring RBAC rules ...
	I1121 13:56:24.512524   15892 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 13:56:24.514841   15892 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 13:56:24.519016   15892 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 13:56:24.521766   15892 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 13:56:24.523748   15892 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 13:56:24.525654   15892 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 13:56:24.883187   15892 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 13:56:25.297065   15892 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 13:56:25.882940   15892 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 13:56:25.883777   15892 kubeadm.go:319] 
	I1121 13:56:25.883865   15892 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 13:56:25.883874   15892 kubeadm.go:319] 
	I1121 13:56:25.883976   15892 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 13:56:25.883997   15892 kubeadm.go:319] 
	I1121 13:56:25.884030   15892 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 13:56:25.884118   15892 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 13:56:25.884202   15892 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 13:56:25.884212   15892 kubeadm.go:319] 
	I1121 13:56:25.884276   15892 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 13:56:25.884287   15892 kubeadm.go:319] 
	I1121 13:56:25.884354   15892 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 13:56:25.884363   15892 kubeadm.go:319] 
	I1121 13:56:25.884434   15892 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 13:56:25.884531   15892 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 13:56:25.884647   15892 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 13:56:25.884662   15892 kubeadm.go:319] 
	I1121 13:56:25.884793   15892 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 13:56:25.884867   15892 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 13:56:25.884874   15892 kubeadm.go:319] 
	I1121 13:56:25.884976   15892 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zw15bu.qcstwz6fx1p3zpbt \
	I1121 13:56:25.885118   15892 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 \
	I1121 13:56:25.885140   15892 kubeadm.go:319] 	--control-plane 
	I1121 13:56:25.885147   15892 kubeadm.go:319] 
	I1121 13:56:25.885259   15892 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 13:56:25.885269   15892 kubeadm.go:319] 
	I1121 13:56:25.885344   15892 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zw15bu.qcstwz6fx1p3zpbt \
	I1121 13:56:25.885469   15892 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 
	I1121 13:56:25.887390   15892 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 13:56:25.887548   15892 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 13:56:25.887607   15892 cni.go:84] Creating CNI manager for ""
	I1121 13:56:25.887626   15892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 13:56:25.889059   15892 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 13:56:25.890142   15892 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 13:56:25.894147   15892 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 13:56:25.894161   15892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 13:56:25.906093   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 13:56:26.088873   15892 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 13:56:26.088959   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:26.088974   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-243127 minikube.k8s.io/updated_at=2025_11_21T13_56_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=addons-243127 minikube.k8s.io/primary=true
	I1121 13:56:26.167598   15892 ops.go:34] apiserver oom_adj: -16
	I1121 13:56:26.167695   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:26.667916   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:27.168465   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:27.668036   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:28.168585   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:28.667821   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:29.168454   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:29.667892   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:30.168312   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:30.668161   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:31.167740   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:31.226960   15892 kubeadm.go:1114] duration metric: took 5.138055817s to wait for elevateKubeSystemPrivileges
	I1121 13:56:31.227003   15892 kubeadm.go:403] duration metric: took 14.742010813s to StartCluster
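The run of "kubectl get sa default" calls above is a readiness poll: kubeadm init has already returned, but minikube waits (elevateKubeSystemPrivileges) until the controller manager has created the default ServiceAccount before granting kube-system its cluster-admin binding. A shell analogue of the loop (sketch; the ~0.5s cadence matches the timestamps above):

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done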
	I1121 13:56:31.227023   15892 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:31.227128   15892 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 13:56:31.227546   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:31.227738   15892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 13:56:31.227761   15892 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 13:56:31.227815   15892 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1121 13:56:31.227950   15892 addons.go:70] Setting gcp-auth=true in profile "addons-243127"
	I1121 13:56:31.227963   15892 addons.go:70] Setting ingress-dns=true in profile "addons-243127"
	I1121 13:56:31.227978   15892 addons.go:239] Setting addon ingress-dns=true in "addons-243127"
	I1121 13:56:31.227983   15892 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-243127"
	I1121 13:56:31.227984   15892 addons.go:70] Setting registry-creds=true in profile "addons-243127"
	I1121 13:56:31.227995   15892 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-243127"
	I1121 13:56:31.228013   15892 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:56:31.228030   15892 addons.go:70] Setting inspektor-gadget=true in profile "addons-243127"
	I1121 13:56:31.228037   15892 addons.go:70] Setting volcano=true in profile "addons-243127"
	I1121 13:56:31.228023   15892 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-243127"
	I1121 13:56:31.228032   15892 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-243127"
	I1121 13:56:31.228047   15892 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-243127"
	I1121 13:56:31.228052   15892 addons.go:70] Setting volumesnapshots=true in profile "addons-243127"
	I1121 13:56:31.227978   15892 mustload.go:66] Loading cluster: addons-243127
	I1121 13:56:31.228068   15892 addons.go:70] Setting registry=true in profile "addons-243127"
	I1121 13:56:31.228083   15892 addons.go:239] Setting addon registry=true in "addons-243127"
	I1121 13:56:31.228088   15892 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-243127"
	I1121 13:56:31.228107   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228118   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228121   15892 addons.go:239] Setting addon volumesnapshots=true in "addons-243127"
	I1121 13:56:31.228127   15892 addons.go:70] Setting cloud-spanner=true in profile "addons-243127"
	I1121 13:56:31.228141   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228144   15892 addons.go:239] Setting addon cloud-spanner=true in "addons-243127"
	I1121 13:56:31.228181   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228219   15892 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:56:31.228253   15892 addons.go:70] Setting metrics-server=true in profile "addons-243127"
	I1121 13:56:31.228286   15892 addons.go:239] Setting addon metrics-server=true in "addons-243127"
	I1121 13:56:31.228318   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228048   15892 addons.go:239] Setting addon volcano=true in "addons-243127"
	I1121 13:56:31.228408   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228482   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.228674   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.228121   15892 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-243127"
	I1121 13:56:31.228778   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228792   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.229046   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.227950   15892 addons.go:70] Setting yakd=true in profile "addons-243127"
	I1121 13:56:31.228043   15892 addons.go:239] Setting addon inspektor-gadget=true in "addons-243127"
	I1121 13:56:31.228061   15892 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-243127"
	I1121 13:56:31.227958   15892 addons.go:70] Setting ingress=true in profile "addons-243127"
	I1121 13:56:31.228675   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.228022   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.229294   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.228674   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.229626   15892 out.go:179] * Verifying Kubernetes components...
	I1121 13:56:31.229711   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.229644   15892 addons.go:239] Setting addon yakd=true in "addons-243127"
	I1121 13:56:31.229998   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.230003   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.230490   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.228038   15892 addons.go:70] Setting default-storageclass=true in profile "addons-243127"
	I1121 13:56:31.233034   15892 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-243127"
	I1121 13:56:31.233338   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.233704   15892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 13:56:31.233822   15892 addons.go:239] Setting addon ingress=true in "addons-243127"
	I1121 13:56:31.233860   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.229626   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.234649   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228024   15892 addons.go:239] Setting addon registry-creds=true in "addons-243127"
	I1121 13:56:31.236578   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228026   15892 addons.go:70] Setting storage-provisioner=true in profile "addons-243127"
	I1121 13:56:31.237338   15892 addons.go:239] Setting addon storage-provisioner=true in "addons-243127"
	I1121 13:56:31.237361   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.237612   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.237842   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.238752   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.228022   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.240213   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.256946   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.294288   15892 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1121 13:56:31.294364   15892 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1121 13:56:31.295982   15892 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 13:56:31.296061   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1121 13:56:31.296034   15892 out.go:179]   - Using image docker.io/registry:3.0.0
	I1121 13:56:31.296245   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.298955   15892 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1121 13:56:31.298972   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1121 13:56:31.299021   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.310218   15892 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1121 13:56:31.311377   15892 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1121 13:56:31.312427   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1121 13:56:31.311385   15892 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1121 13:56:31.312679   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.313505   15892 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 13:56:31.313522   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1121 13:56:31.313587   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.315995   15892 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1121 13:56:31.317805   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1121 13:56:31.318731   15892 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1121 13:56:31.318747   15892 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1121 13:56:31.318819   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.319357   15892 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1121 13:56:31.319371   15892 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1121 13:56:31.319506   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.320034   15892 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 13:56:31.321019   15892 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 13:56:31.321037   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 13:56:31.321083   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.321545   15892 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-243127"
	I1121 13:56:31.321642   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.322150   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.322473   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1121 13:56:31.326348   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1121 13:56:31.327348   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1121 13:56:31.328375   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1121 13:56:31.330379   15892 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1121 13:56:31.330590   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1121 13:56:31.331397   15892 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 13:56:31.331416   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1121 13:56:31.331469   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.332484   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.333571   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1121 13:56:31.334623   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1121 13:56:31.336147   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1121 13:56:31.337249   15892 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1121 13:56:31.337815   15892 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1121 13:56:31.337830   15892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1121 13:56:31.337904   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.338767   15892 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 13:56:31.338792   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1121 13:56:31.339035   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	W1121 13:56:31.343647   15892 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1121 13:56:31.350700   15892 addons.go:239] Setting addon default-storageclass=true in "addons-243127"
	I1121 13:56:31.350743   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.351262   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.368340   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.368646   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.369115   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.369287   15892 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1121 13:56:31.369352   15892 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1121 13:56:31.372227   15892 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1121 13:56:31.372249   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1121 13:56:31.372298   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.372498   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.376576   15892 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1121 13:56:31.376979   15892 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 13:56:31.377040   15892 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1121 13:56:31.378028   15892 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1121 13:56:31.378046   15892 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1121 13:56:31.378106   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.382280   15892 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 13:56:31.382610   15892 out.go:179]   - Using image docker.io/busybox:stable
	I1121 13:56:31.385186   15892 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 13:56:31.385237   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1121 13:56:31.385348   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.385598   15892 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 13:56:31.385655   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1121 13:56:31.385857   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.399732   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.399869   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.408281   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.408769   15892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 13:56:31.411274   15892 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 13:56:31.411375   15892 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 13:56:31.411474   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.411465   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.413862   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.422517   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.428924   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	W1121 13:56:31.432908   15892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 13:56:31.432973   15892 retry.go:31] will retry after 357.87567ms: ssh: handshake failed: EOF
	I1121 13:56:31.440097   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.445769   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.448763   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.454266   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	W1121 13:56:31.456835   15892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 13:56:31.456884   15892 retry.go:31] will retry after 156.476315ms: ssh: handshake failed: EOF
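The two "handshake failed: EOF" warnings above are transient: with many addon goroutines dialing at once, sshd in the freshly created container can drop early connections, so sshutil retries after a short randomized delay. A shell analogue (sketch; the delays are illustrative, key/port/user from the sshutil lines above):

    for delay in 0.16 0.36 0.8; do
      ssh -i id_rsa -p 32768 docker@127.0.0.1 true && break
      sleep "$delay"
    done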
	I1121 13:56:31.460891   15892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 13:56:31.557167   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 13:56:31.558024   15892 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1121 13:56:31.558086   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1121 13:56:31.564964   15892 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1121 13:56:31.565049   15892 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1121 13:56:31.568272   15892 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1121 13:56:31.568290   15892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1121 13:56:31.569187   15892 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1121 13:56:31.569200   15892 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1121 13:56:31.575914   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 13:56:31.577748   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 13:56:31.581650   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 13:56:31.584925   15892 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1121 13:56:31.584941   15892 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1121 13:56:31.587476   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1121 13:56:31.593026   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 13:56:31.598722   15892 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1121 13:56:31.598779   15892 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1121 13:56:31.604371   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1121 13:56:31.610281   15892 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1121 13:56:31.610299   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1121 13:56:31.612243   15892 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1121 13:56:31.612259   15892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1121 13:56:31.620306   15892 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1121 13:56:31.620323   15892 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1121 13:56:31.623257   15892 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 13:56:31.623334   15892 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1121 13:56:31.624861   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 13:56:31.647233   15892 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1121 13:56:31.647258   15892 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1121 13:56:31.655809   15892 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1121 13:56:31.655886   15892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1121 13:56:31.658906   15892 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1121 13:56:31.658979   15892 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1121 13:56:31.670662   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 13:56:31.671704   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1121 13:56:31.700275   15892 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1121 13:56:31.700300   15892 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1121 13:56:31.701033   15892 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1121 13:56:31.701094   15892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1121 13:56:31.708054   15892 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1121 13:56:31.708077   15892 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1121 13:56:31.751710   15892 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1121 13:56:31.751732   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1121 13:56:31.753543   15892 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1121 13:56:31.753612   15892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1121 13:56:31.760585   15892 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1121 13:56:31.762326   15892 node_ready.go:35] waiting up to 6m0s for node "addons-243127" to be "Ready" ...
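The node_ready.go line starts the poll that produces the `has "Ready":"False" status (will retry)` warnings further down: fetch the node and inspect its NodeReady condition. A client-go sketch of that check, assuming a pre-built clientset (the function name is illustrative, not minikube's):

package addons

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the named node's NodeReady condition is True;
// each "will retry" line above is one failed round of this check while the
// kubelet and CNI settle.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}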
	I1121 13:56:31.770035   15892 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 13:56:31.770439   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1121 13:56:31.814691   15892 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1121 13:56:31.814776   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1121 13:56:31.817370   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 13:56:31.820220   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 13:56:31.821409   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1121 13:56:31.869538   15892 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1121 13:56:31.869655   15892 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1121 13:56:31.922583   15892 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1121 13:56:31.922674   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1121 13:56:31.965555   15892 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1121 13:56:31.965600   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1121 13:56:32.006699   15892 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 13:56:32.006731   15892 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1121 13:56:32.025981   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 13:56:32.065751   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 13:56:32.271332   15892 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-243127" context rescaled to 1 replicas
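The rescale line pins the coredns deployment to a single replica for the single-node cluster. A hedged client-go sketch using the scale subresource; minikube's kapi.go may implement this differently:

package addons

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleCoreDNS sets the coredns deployment's replica count via the scale
// subresource, matching the "rescaled to 1 replicas" log line above.
func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	deployments := cs.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}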
	I1121 13:56:32.534371   15892 addons.go:495] Verifying addon metrics-server=true in "addons-243127"
	I1121 13:56:32.534422   15892 addons.go:495] Verifying addon registry=true in "addons-243127"
	I1121 13:56:32.535743   15892 out.go:179] * Verifying registry addon...
	I1121 13:56:32.538082   15892 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1121 13:56:32.540726   15892 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 13:56:32.540744   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
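The kapi.go:96 lines that dominate the rest of this log come from a loop of this shape: list pods matching a label selector, report any that are not yet Running, and poll again (at roughly 500ms intervals, judging by the timestamps). A self-contained client-go sketch under those assumptions, using the on-node kubeconfig path from the log; names are illustrative, not minikube's:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until every pod matching selector in ns reports Running.
func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				ready = false
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPods(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
}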
	I1121 13:56:33.041211   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:33.062627   15892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.245153808s)
	I1121 13:56:33.062674   15892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.242344407s)
	W1121 13:56:33.062709   15892 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1121 13:56:33.062759   15892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.241201565s)
	I1121 13:56:33.062768   15892 retry.go:31] will retry after 271.534933ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1121 13:56:33.062896   15892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.036878015s)
	I1121 13:56:33.062919   15892 addons.go:495] Verifying addon ingress=true in "addons-243127"
	I1121 13:56:33.063134   15892 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-243127"
	I1121 13:56:33.064409   15892 out.go:179] * Verifying csi-hostpath-driver addon...
	I1121 13:56:33.064473   15892 out.go:179] * Verifying ingress addon...
	I1121 13:56:33.064482   15892 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-243127 service yakd-dashboard -n yakd-dashboard
	
	I1121 13:56:33.066480   15892 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1121 13:56:33.067201   15892 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1121 13:56:33.068620   15892 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 13:56:33.068638   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:33.070789   15892 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1121 13:56:33.070807   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:33.334420   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
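The failure above is a CRD ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same apply batch that creates the snapshot.storage.k8s.io CRDs, and kubectl cannot resolve the new kind's REST mapping until those CRDs are established, hence `ensure CRDs are installed first`. retry.go backs off (271ms here) and re-runs the same file list, this time with `--force`, which completes below once the CRDs are in place. A sketch of such a retry loop (the backoff schedule is an assumption, not minikube's):

package addons

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs kubectl apply until CRD-backed kinds resolve, a
// sketch of the retry behaviour visible in the log above.
func applyWithRetry(args []string, attempts int) error {
	backoff := 250 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("%v\n%s", err, out)
		fmt.Printf("will retry after %v: %v\n", backoff, lastErr)
		time.Sleep(backoff)
		backoff *= 2
	}
	return lastErr
}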
	I1121 13:56:33.541330   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:33.569224   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:33.569362   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:33.765065   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:34.040915   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:34.069106   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:34.069217   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:34.540271   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:34.569216   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:34.569295   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:35.041172   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:35.069223   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:35.069280   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:35.540607   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:35.641698   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:35.641813   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:35.772780   15892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.438316046s)
	I1121 13:56:36.041592   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:36.068618   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:36.069600   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:36.265192   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:36.540934   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:36.568722   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:36.569760   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:37.040388   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:37.069377   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:37.069446   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:37.540423   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:37.569355   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:37.569398   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:38.040259   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:38.069400   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:38.069449   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:38.540932   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:38.568774   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:38.569840   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:38.764167   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:38.937855   15892 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1121 13:56:38.937925   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:38.954895   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:39.041950   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:39.052512   15892 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1121 13:56:39.064137   15892 addons.go:239] Setting addon gcp-auth=true in "addons-243127"
	I1121 13:56:39.064183   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:39.064515   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:39.068837   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:39.069939   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:39.081463   15892 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1121 13:56:39.081508   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:39.097283   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
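The cli_runner lines above recover the host port Docker published for the node container's sshd (22/tcp) so a fresh SSH client can target 127.0.0.1:<port>, as the sshutil.go line confirms. The same Go template from the log, wrapped in a small sketch:

package addons

import (
	"os/exec"
	"strings"
)

// sshPort returns the host port mapped to the container's 22/tcp, using the
// docker inspect Go template shown in the cli_runner log lines.
func sshPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}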
	I1121 13:56:39.187394   15892 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 13:56:39.188465   15892 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1121 13:56:39.189730   15892 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1121 13:56:39.189746   15892 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1121 13:56:39.201780   15892 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1121 13:56:39.201795   15892 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1121 13:56:39.213710   15892 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 13:56:39.213726   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1121 13:56:39.225167   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 13:56:39.500668   15892 addons.go:495] Verifying addon gcp-auth=true in "addons-243127"
	I1121 13:56:39.501976   15892 out.go:179] * Verifying gcp-auth addon...
	I1121 13:56:39.503640   15892 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1121 13:56:39.505509   15892 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1121 13:56:39.505524   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:39.540132   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:39.568961   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:39.569056   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:40.006196   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:40.040977   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:40.068628   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:40.069779   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:40.506408   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:40.540053   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:40.568674   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:40.570033   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:40.764959   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:41.006293   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:41.040050   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:41.069111   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:41.069114   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:41.506637   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:41.540260   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:41.569361   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:41.569536   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:42.006699   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:42.040662   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:42.068394   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:42.069625   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:42.506871   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:42.540637   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:42.568462   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:42.569410   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:43.006704   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:43.040375   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:43.069308   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:43.069460   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:43.264367   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:43.506822   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:43.540630   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:43.568359   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:43.569459   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:44.006623   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:44.040438   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:44.068551   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:44.069682   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:44.507208   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:44.539979   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:44.568589   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:44.569786   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:45.005968   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:45.041164   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:45.069338   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:45.069451   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:45.265439   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:45.507492   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:45.540252   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:45.569084   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:45.569164   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:46.006761   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:46.040660   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:46.068415   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:46.069649   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:46.507019   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:46.540697   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:46.568412   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:46.569613   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:47.006619   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:47.040533   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:47.069550   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:47.069644   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:47.506621   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:47.540396   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:47.569618   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:47.569645   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 13:56:47.764461   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:48.006819   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:48.040603   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:48.068266   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:48.069409   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:48.506477   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:48.540029   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:48.568703   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:48.569924   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:49.006604   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:49.040495   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:49.068647   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:49.069660   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:49.506165   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:49.540993   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:49.568770   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:49.569821   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:49.764805   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:50.006166   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:50.040897   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:50.068617   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:50.069903   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:50.506672   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:50.540464   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:50.569360   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:50.569417   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:51.006629   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:51.040487   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:51.069323   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:51.069358   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:51.505942   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:51.540736   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:51.568462   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:51.569512   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:52.006811   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:52.040581   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:52.068290   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:52.069343   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:52.264463   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:52.506994   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:52.540626   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:52.568202   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:52.569455   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:53.006359   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:53.040138   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:53.068946   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:53.069039   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:53.506641   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:53.540253   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:53.569221   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:53.569471   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:54.006264   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:54.040116   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:54.069320   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:54.069368   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:54.265026   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:54.506815   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:54.540468   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:54.569312   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:54.569513   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:55.006486   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:55.040199   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:55.069340   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:55.069518   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:55.506169   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:55.539903   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:55.568622   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:55.569733   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:56.006106   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:56.040882   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:56.068713   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:56.069678   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:56.506326   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:56.540062   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:56.569029   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:56.569102   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:56.765153   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:57.006633   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:57.040523   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:57.069610   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:57.069646   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:57.505917   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:57.540876   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:57.568779   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:57.569936   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:58.006278   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:58.040043   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:58.068923   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:58.068967   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:58.506451   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:58.540195   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:58.569184   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:58.569339   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:59.006360   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:59.040083   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:59.069206   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:59.069319   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:59.265264   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:59.506497   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:59.540312   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:59.569320   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:59.569378   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:00.006202   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:00.039851   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:00.068729   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:00.069846   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:00.506266   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:00.540045   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:00.569046   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:00.569056   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:01.006120   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:01.039879   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:01.068680   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:01.069744   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:01.506202   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:01.539950   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:01.568729   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:01.572713   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:57:01.764891   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:57:02.006324   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:02.040288   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:02.069497   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:02.069609   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:02.505725   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:02.540575   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:02.569503   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:02.569615   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:03.006312   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:03.039905   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:03.068679   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:03.069924   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:03.506735   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:03.540475   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:03.569337   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:03.569547   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:04.007118   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:04.040899   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:04.069006   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:04.069147   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:57:04.264987   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:57:04.506578   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:04.540283   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:04.569208   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:04.569282   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:05.006295   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:05.040065   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:05.069063   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:05.069104   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:05.506782   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:05.540550   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:05.568322   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:05.569380   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:06.006188   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:06.040025   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:06.068971   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:06.069090   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:57:06.265289   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:57:06.506642   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:06.540485   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:06.569438   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:06.569536   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:07.006955   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:07.040744   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:07.068753   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:07.069839   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:07.506546   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:07.540504   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:07.568340   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:07.569441   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:08.006650   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:08.040525   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:08.069685   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:08.069761   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:08.506122   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:08.540973   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:08.568614   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:08.569924   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:57:08.764843   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:57:09.005963   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:09.040738   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:09.068944   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:09.069981   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:09.506418   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:09.540287   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:09.569186   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:09.569235   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:10.006523   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:10.040417   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:10.069466   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:10.069652   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:10.505895   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:10.540679   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:10.568475   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:10.569727   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:57:10.765389   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:57:11.005747   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:11.040536   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:11.069408   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:11.069574   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:11.506295   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:11.540206   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:11.569265   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:11.569370   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:12.008711   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:12.042353   15892 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 13:57:12.042379   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:12.073706   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:12.073770   15892 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 13:57:12.073791   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:12.264343   15892 node_ready.go:49] node "addons-243127" is "Ready"
	I1121 13:57:12.264367   15892 node_ready.go:38] duration metric: took 40.502022718s for node "addons-243127" to be "Ready" ...
	I1121 13:57:12.264380   15892 api_server.go:52] waiting for apiserver process to appear ...
	I1121 13:57:12.264431   15892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 13:57:12.280931   15892 api_server.go:72] duration metric: took 41.053141386s to wait for apiserver process to appear ...
	I1121 13:57:12.280968   15892 api_server.go:88] waiting for apiserver healthz status ...
	I1121 13:57:12.280989   15892 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1121 13:57:12.285197   15892 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1121 13:57:12.285936   15892 api_server.go:141] control plane version: v1.34.1
	I1121 13:57:12.285958   15892 api_server.go:131] duration metric: took 4.982226ms to wait for apiserver health ...
	I1121 13:57:12.285968   15892 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 13:57:12.289161   15892 system_pods.go:59] 20 kube-system pods found
	I1121 13:57:12.289189   15892 system_pods.go:61] "amd-gpu-device-plugin-rs4wk" [d044fde9-5989-433c-bea4-d92a04c49500] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 13:57:12.289196   15892 system_pods.go:61] "coredns-66bc5c9577-4zrd8" [d3c3cb4a-fb2e-4e66-bfcc-1627a5fd1398] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:57:12.289203   15892 system_pods.go:61] "csi-hostpath-attacher-0" [c3dafc99-516f-4a8e-b4f7-d89c25df4961] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:57:12.289210   15892 system_pods.go:61] "csi-hostpath-resizer-0" [e9ccb693-950c-4e61-9db5-c3b02b9c5ebb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:57:12.289219   15892 system_pods.go:61] "csi-hostpathplugin-4xdqt" [8963c4d1-c27f-4a22-8820-2ed2b0176b81] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:57:12.289223   15892 system_pods.go:61] "etcd-addons-243127" [48c86679-5869-44a5-8f63-9587dd40dc0c] Running
	I1121 13:57:12.289227   15892 system_pods.go:61] "kindnet-ftx9v" [c1512147-d6dc-4d1b-bc29-edeeb1276825] Running
	I1121 13:57:12.289230   15892 system_pods.go:61] "kube-apiserver-addons-243127" [b9fbd0f1-07a6-44e0-87d9-73871a7270d2] Running
	I1121 13:57:12.289235   15892 system_pods.go:61] "kube-controller-manager-addons-243127" [399be348-dc97-47a2-8417-6cfe4bfd8119] Running
	I1121 13:57:12.289241   15892 system_pods.go:61] "kube-ingress-dns-minikube" [3a1fd53e-57ea-47dc-ae8a-b853499a67b7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:57:12.289248   15892 system_pods.go:61] "kube-proxy-jjn5n" [855bd4fa-48bd-4288-9b4b-7672fea98a04] Running
	I1121 13:57:12.289252   15892 system_pods.go:61] "kube-scheduler-addons-243127" [d7e53232-7ae8-4cbe-8e6f-35e9c89a5144] Running
	I1121 13:57:12.289257   15892 system_pods.go:61] "metrics-server-85b7d694d7-4khd6" [9d42569c-8cf8-439e-858c-1acf1f059214] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:57:12.289265   15892 system_pods.go:61] "nvidia-device-plugin-daemonset-v2h2s" [915d9baa-5e34-4320-9fe6-d65726ad8bb0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:57:12.289270   15892 system_pods.go:61] "registry-6b586f9694-2dn55" [01d1fb94-7e93-4c68-b4a5-4a7aec2eeffb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:57:12.289278   15892 system_pods.go:61] "registry-creds-764b6fb674-w4fpg" [bf5b16be-3fde-465c-8a46-1b7fccb15f4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:57:12.289283   15892 system_pods.go:61] "registry-proxy-9k9gw" [7e745192-f6fe-4677-b1f9-90e0ca68e72e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:57:12.289289   15892 system_pods.go:61] "snapshot-controller-7d9fbc56b8-l5nct" [0f555ce4-a222-47f4-b3d7-f1f1d7e80012] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.289295   15892 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mw8mv" [cbe59326-60f1-4141-9c9a-e2a1976c98d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.289303   15892 system_pods.go:61] "storage-provisioner" [30c258b4-4d04-4c2b-8635-5c9fadbed185] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 13:57:12.289308   15892 system_pods.go:74] duration metric: took 3.334503ms to wait for pod list to return data ...
	I1121 13:57:12.289318   15892 default_sa.go:34] waiting for default service account to be created ...
	I1121 13:57:12.290938   15892 default_sa.go:45] found service account: "default"
	I1121 13:57:12.290954   15892 default_sa.go:55] duration metric: took 1.632314ms for default service account to be created ...
	I1121 13:57:12.290961   15892 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 13:57:12.293632   15892 system_pods.go:86] 20 kube-system pods found
	I1121 13:57:12.293653   15892 system_pods.go:89] "amd-gpu-device-plugin-rs4wk" [d044fde9-5989-433c-bea4-d92a04c49500] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 13:57:12.293660   15892 system_pods.go:89] "coredns-66bc5c9577-4zrd8" [d3c3cb4a-fb2e-4e66-bfcc-1627a5fd1398] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:57:12.293667   15892 system_pods.go:89] "csi-hostpath-attacher-0" [c3dafc99-516f-4a8e-b4f7-d89c25df4961] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:57:12.293672   15892 system_pods.go:89] "csi-hostpath-resizer-0" [e9ccb693-950c-4e61-9db5-c3b02b9c5ebb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:57:12.293678   15892 system_pods.go:89] "csi-hostpathplugin-4xdqt" [8963c4d1-c27f-4a22-8820-2ed2b0176b81] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:57:12.293683   15892 system_pods.go:89] "etcd-addons-243127" [48c86679-5869-44a5-8f63-9587dd40dc0c] Running
	I1121 13:57:12.293687   15892 system_pods.go:89] "kindnet-ftx9v" [c1512147-d6dc-4d1b-bc29-edeeb1276825] Running
	I1121 13:57:12.293691   15892 system_pods.go:89] "kube-apiserver-addons-243127" [b9fbd0f1-07a6-44e0-87d9-73871a7270d2] Running
	I1121 13:57:12.293694   15892 system_pods.go:89] "kube-controller-manager-addons-243127" [399be348-dc97-47a2-8417-6cfe4bfd8119] Running
	I1121 13:57:12.293703   15892 system_pods.go:89] "kube-ingress-dns-minikube" [3a1fd53e-57ea-47dc-ae8a-b853499a67b7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:57:12.293707   15892 system_pods.go:89] "kube-proxy-jjn5n" [855bd4fa-48bd-4288-9b4b-7672fea98a04] Running
	I1121 13:57:12.293711   15892 system_pods.go:89] "kube-scheduler-addons-243127" [d7e53232-7ae8-4cbe-8e6f-35e9c89a5144] Running
	I1121 13:57:12.293715   15892 system_pods.go:89] "metrics-server-85b7d694d7-4khd6" [9d42569c-8cf8-439e-858c-1acf1f059214] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:57:12.293722   15892 system_pods.go:89] "nvidia-device-plugin-daemonset-v2h2s" [915d9baa-5e34-4320-9fe6-d65726ad8bb0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:57:12.293727   15892 system_pods.go:89] "registry-6b586f9694-2dn55" [01d1fb94-7e93-4c68-b4a5-4a7aec2eeffb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:57:12.293735   15892 system_pods.go:89] "registry-creds-764b6fb674-w4fpg" [bf5b16be-3fde-465c-8a46-1b7fccb15f4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:57:12.293741   15892 system_pods.go:89] "registry-proxy-9k9gw" [7e745192-f6fe-4677-b1f9-90e0ca68e72e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:57:12.293747   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l5nct" [0f555ce4-a222-47f4-b3d7-f1f1d7e80012] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.293754   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mw8mv" [cbe59326-60f1-4141-9c9a-e2a1976c98d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.293759   15892 system_pods.go:89] "storage-provisioner" [30c258b4-4d04-4c2b-8635-5c9fadbed185] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 13:57:12.293776   15892 retry.go:31] will retry after 203.129367ms: missing components: kube-dns
	I1121 13:57:12.500221   15892 system_pods.go:86] 20 kube-system pods found
	I1121 13:57:12.500251   15892 system_pods.go:89] "amd-gpu-device-plugin-rs4wk" [d044fde9-5989-433c-bea4-d92a04c49500] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 13:57:12.500260   15892 system_pods.go:89] "coredns-66bc5c9577-4zrd8" [d3c3cb4a-fb2e-4e66-bfcc-1627a5fd1398] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:57:12.500266   15892 system_pods.go:89] "csi-hostpath-attacher-0" [c3dafc99-516f-4a8e-b4f7-d89c25df4961] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:57:12.500272   15892 system_pods.go:89] "csi-hostpath-resizer-0" [e9ccb693-950c-4e61-9db5-c3b02b9c5ebb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:57:12.500277   15892 system_pods.go:89] "csi-hostpathplugin-4xdqt" [8963c4d1-c27f-4a22-8820-2ed2b0176b81] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:57:12.500285   15892 system_pods.go:89] "etcd-addons-243127" [48c86679-5869-44a5-8f63-9587dd40dc0c] Running
	I1121 13:57:12.500289   15892 system_pods.go:89] "kindnet-ftx9v" [c1512147-d6dc-4d1b-bc29-edeeb1276825] Running
	I1121 13:57:12.500292   15892 system_pods.go:89] "kube-apiserver-addons-243127" [b9fbd0f1-07a6-44e0-87d9-73871a7270d2] Running
	I1121 13:57:12.500295   15892 system_pods.go:89] "kube-controller-manager-addons-243127" [399be348-dc97-47a2-8417-6cfe4bfd8119] Running
	I1121 13:57:12.500301   15892 system_pods.go:89] "kube-ingress-dns-minikube" [3a1fd53e-57ea-47dc-ae8a-b853499a67b7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:57:12.500305   15892 system_pods.go:89] "kube-proxy-jjn5n" [855bd4fa-48bd-4288-9b4b-7672fea98a04] Running
	I1121 13:57:12.500310   15892 system_pods.go:89] "kube-scheduler-addons-243127" [d7e53232-7ae8-4cbe-8e6f-35e9c89a5144] Running
	I1121 13:57:12.500317   15892 system_pods.go:89] "metrics-server-85b7d694d7-4khd6" [9d42569c-8cf8-439e-858c-1acf1f059214] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:57:12.500327   15892 system_pods.go:89] "nvidia-device-plugin-daemonset-v2h2s" [915d9baa-5e34-4320-9fe6-d65726ad8bb0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:57:12.500335   15892 system_pods.go:89] "registry-6b586f9694-2dn55" [01d1fb94-7e93-4c68-b4a5-4a7aec2eeffb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:57:12.500340   15892 system_pods.go:89] "registry-creds-764b6fb674-w4fpg" [bf5b16be-3fde-465c-8a46-1b7fccb15f4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:57:12.500348   15892 system_pods.go:89] "registry-proxy-9k9gw" [7e745192-f6fe-4677-b1f9-90e0ca68e72e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:57:12.500353   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l5nct" [0f555ce4-a222-47f4-b3d7-f1f1d7e80012] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.500358   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mw8mv" [cbe59326-60f1-4141-9c9a-e2a1976c98d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.500363   15892 system_pods.go:89] "storage-provisioner" [30c258b4-4d04-4c2b-8635-5c9fadbed185] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 13:57:12.500380   15892 retry.go:31] will retry after 307.17282ms: missing components: kube-dns
	I1121 13:57:12.505358   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:12.599581   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:12.599584   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:12.599674   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:12.811975   15892 system_pods.go:86] 20 kube-system pods found
	I1121 13:57:12.812004   15892 system_pods.go:89] "amd-gpu-device-plugin-rs4wk" [d044fde9-5989-433c-bea4-d92a04c49500] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 13:57:12.812012   15892 system_pods.go:89] "coredns-66bc5c9577-4zrd8" [d3c3cb4a-fb2e-4e66-bfcc-1627a5fd1398] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:57:12.812018   15892 system_pods.go:89] "csi-hostpath-attacher-0" [c3dafc99-516f-4a8e-b4f7-d89c25df4961] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:57:12.812025   15892 system_pods.go:89] "csi-hostpath-resizer-0" [e9ccb693-950c-4e61-9db5-c3b02b9c5ebb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:57:12.812030   15892 system_pods.go:89] "csi-hostpathplugin-4xdqt" [8963c4d1-c27f-4a22-8820-2ed2b0176b81] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:57:12.812033   15892 system_pods.go:89] "etcd-addons-243127" [48c86679-5869-44a5-8f63-9587dd40dc0c] Running
	I1121 13:57:12.812037   15892 system_pods.go:89] "kindnet-ftx9v" [c1512147-d6dc-4d1b-bc29-edeeb1276825] Running
	I1121 13:57:12.812040   15892 system_pods.go:89] "kube-apiserver-addons-243127" [b9fbd0f1-07a6-44e0-87d9-73871a7270d2] Running
	I1121 13:57:12.812044   15892 system_pods.go:89] "kube-controller-manager-addons-243127" [399be348-dc97-47a2-8417-6cfe4bfd8119] Running
	I1121 13:57:12.812058   15892 system_pods.go:89] "kube-ingress-dns-minikube" [3a1fd53e-57ea-47dc-ae8a-b853499a67b7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:57:12.812065   15892 system_pods.go:89] "kube-proxy-jjn5n" [855bd4fa-48bd-4288-9b4b-7672fea98a04] Running
	I1121 13:57:12.812069   15892 system_pods.go:89] "kube-scheduler-addons-243127" [d7e53232-7ae8-4cbe-8e6f-35e9c89a5144] Running
	I1121 13:57:12.812073   15892 system_pods.go:89] "metrics-server-85b7d694d7-4khd6" [9d42569c-8cf8-439e-858c-1acf1f059214] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:57:12.812080   15892 system_pods.go:89] "nvidia-device-plugin-daemonset-v2h2s" [915d9baa-5e34-4320-9fe6-d65726ad8bb0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:57:12.812088   15892 system_pods.go:89] "registry-6b586f9694-2dn55" [01d1fb94-7e93-4c68-b4a5-4a7aec2eeffb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:57:12.812093   15892 system_pods.go:89] "registry-creds-764b6fb674-w4fpg" [bf5b16be-3fde-465c-8a46-1b7fccb15f4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:57:12.812100   15892 system_pods.go:89] "registry-proxy-9k9gw" [7e745192-f6fe-4677-b1f9-90e0ca68e72e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:57:12.812105   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l5nct" [0f555ce4-a222-47f4-b3d7-f1f1d7e80012] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.812114   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mw8mv" [cbe59326-60f1-4141-9c9a-e2a1976c98d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.812119   15892 system_pods.go:89] "storage-provisioner" [30c258b4-4d04-4c2b-8635-5c9fadbed185] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 13:57:12.812134   15892 retry.go:31] will retry after 410.270718ms: missing components: kube-dns
	I1121 13:57:13.006959   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:13.041234   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:13.069635   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:13.069662   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:13.229236   15892 system_pods.go:86] 20 kube-system pods found
	I1121 13:57:13.229274   15892 system_pods.go:89] "amd-gpu-device-plugin-rs4wk" [d044fde9-5989-433c-bea4-d92a04c49500] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 13:57:13.229284   15892 system_pods.go:89] "coredns-66bc5c9577-4zrd8" [d3c3cb4a-fb2e-4e66-bfcc-1627a5fd1398] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:57:13.229295   15892 system_pods.go:89] "csi-hostpath-attacher-0" [c3dafc99-516f-4a8e-b4f7-d89c25df4961] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:57:13.229303   15892 system_pods.go:89] "csi-hostpath-resizer-0" [e9ccb693-950c-4e61-9db5-c3b02b9c5ebb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:57:13.229312   15892 system_pods.go:89] "csi-hostpathplugin-4xdqt" [8963c4d1-c27f-4a22-8820-2ed2b0176b81] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:57:13.229318   15892 system_pods.go:89] "etcd-addons-243127" [48c86679-5869-44a5-8f63-9587dd40dc0c] Running
	I1121 13:57:13.229324   15892 system_pods.go:89] "kindnet-ftx9v" [c1512147-d6dc-4d1b-bc29-edeeb1276825] Running
	I1121 13:57:13.229329   15892 system_pods.go:89] "kube-apiserver-addons-243127" [b9fbd0f1-07a6-44e0-87d9-73871a7270d2] Running
	I1121 13:57:13.229335   15892 system_pods.go:89] "kube-controller-manager-addons-243127" [399be348-dc97-47a2-8417-6cfe4bfd8119] Running
	I1121 13:57:13.229345   15892 system_pods.go:89] "kube-ingress-dns-minikube" [3a1fd53e-57ea-47dc-ae8a-b853499a67b7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:57:13.229350   15892 system_pods.go:89] "kube-proxy-jjn5n" [855bd4fa-48bd-4288-9b4b-7672fea98a04] Running
	I1121 13:57:13.229355   15892 system_pods.go:89] "kube-scheduler-addons-243127" [d7e53232-7ae8-4cbe-8e6f-35e9c89a5144] Running
	I1121 13:57:13.229362   15892 system_pods.go:89] "metrics-server-85b7d694d7-4khd6" [9d42569c-8cf8-439e-858c-1acf1f059214] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:57:13.229370   15892 system_pods.go:89] "nvidia-device-plugin-daemonset-v2h2s" [915d9baa-5e34-4320-9fe6-d65726ad8bb0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:57:13.229378   15892 system_pods.go:89] "registry-6b586f9694-2dn55" [01d1fb94-7e93-4c68-b4a5-4a7aec2eeffb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:57:13.229395   15892 system_pods.go:89] "registry-creds-764b6fb674-w4fpg" [bf5b16be-3fde-465c-8a46-1b7fccb15f4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:57:13.229402   15892 system_pods.go:89] "registry-proxy-9k9gw" [7e745192-f6fe-4677-b1f9-90e0ca68e72e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:57:13.229410   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l5nct" [0f555ce4-a222-47f4-b3d7-f1f1d7e80012] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:13.229419   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mw8mv" [cbe59326-60f1-4141-9c9a-e2a1976c98d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:13.229424   15892 system_pods.go:89] "storage-provisioner" [30c258b4-4d04-4c2b-8635-5c9fadbed185] Running
	I1121 13:57:13.229441   15892 retry.go:31] will retry after 389.657899ms: missing components: kube-dns
	I1121 13:57:13.507378   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:13.541866   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:13.572015   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:13.572221   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:13.624636   15892 system_pods.go:86] 20 kube-system pods found
	I1121 13:57:13.624667   15892 system_pods.go:89] "amd-gpu-device-plugin-rs4wk" [d044fde9-5989-433c-bea4-d92a04c49500] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 13:57:13.624675   15892 system_pods.go:89] "coredns-66bc5c9577-4zrd8" [d3c3cb4a-fb2e-4e66-bfcc-1627a5fd1398] Running
	I1121 13:57:13.624686   15892 system_pods.go:89] "csi-hostpath-attacher-0" [c3dafc99-516f-4a8e-b4f7-d89c25df4961] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:57:13.624694   15892 system_pods.go:89] "csi-hostpath-resizer-0" [e9ccb693-950c-4e61-9db5-c3b02b9c5ebb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:57:13.624703   15892 system_pods.go:89] "csi-hostpathplugin-4xdqt" [8963c4d1-c27f-4a22-8820-2ed2b0176b81] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:57:13.624719   15892 system_pods.go:89] "etcd-addons-243127" [48c86679-5869-44a5-8f63-9587dd40dc0c] Running
	I1121 13:57:13.624725   15892 system_pods.go:89] "kindnet-ftx9v" [c1512147-d6dc-4d1b-bc29-edeeb1276825] Running
	I1121 13:57:13.624730   15892 system_pods.go:89] "kube-apiserver-addons-243127" [b9fbd0f1-07a6-44e0-87d9-73871a7270d2] Running
	I1121 13:57:13.624735   15892 system_pods.go:89] "kube-controller-manager-addons-243127" [399be348-dc97-47a2-8417-6cfe4bfd8119] Running
	I1121 13:57:13.624743   15892 system_pods.go:89] "kube-ingress-dns-minikube" [3a1fd53e-57ea-47dc-ae8a-b853499a67b7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:57:13.624747   15892 system_pods.go:89] "kube-proxy-jjn5n" [855bd4fa-48bd-4288-9b4b-7672fea98a04] Running
	I1121 13:57:13.624753   15892 system_pods.go:89] "kube-scheduler-addons-243127" [d7e53232-7ae8-4cbe-8e6f-35e9c89a5144] Running
	I1121 13:57:13.624760   15892 system_pods.go:89] "metrics-server-85b7d694d7-4khd6" [9d42569c-8cf8-439e-858c-1acf1f059214] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:57:13.624768   15892 system_pods.go:89] "nvidia-device-plugin-daemonset-v2h2s" [915d9baa-5e34-4320-9fe6-d65726ad8bb0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:57:13.624778   15892 system_pods.go:89] "registry-6b586f9694-2dn55" [01d1fb94-7e93-4c68-b4a5-4a7aec2eeffb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:57:13.624785   15892 system_pods.go:89] "registry-creds-764b6fb674-w4fpg" [bf5b16be-3fde-465c-8a46-1b7fccb15f4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:57:13.624792   15892 system_pods.go:89] "registry-proxy-9k9gw" [7e745192-f6fe-4677-b1f9-90e0ca68e72e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:57:13.624803   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l5nct" [0f555ce4-a222-47f4-b3d7-f1f1d7e80012] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:13.624812   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mw8mv" [cbe59326-60f1-4141-9c9a-e2a1976c98d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:13.624818   15892 system_pods.go:89] "storage-provisioner" [30c258b4-4d04-4c2b-8635-5c9fadbed185] Running
	I1121 13:57:13.624828   15892 system_pods.go:126] duration metric: took 1.333860578s to wait for k8s-apps to be running ...
	I1121 13:57:13.624837   15892 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 13:57:13.624886   15892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 13:57:13.644018   15892 system_svc.go:56] duration metric: took 19.174078ms WaitForService to wait for kubelet
	I1121 13:57:13.644045   15892 kubeadm.go:587] duration metric: took 42.416259147s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 13:57:13.644066   15892 node_conditions.go:102] verifying NodePressure condition ...
	I1121 13:57:13.647022   15892 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 13:57:13.647052   15892 node_conditions.go:123] node cpu capacity is 8
	I1121 13:57:13.647065   15892 node_conditions.go:105] duration metric: took 2.993925ms to run NodePressure ...
	I1121 13:57:13.647079   15892 start.go:242] waiting for startup goroutines ...
	I1121 13:57:14.007531   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:14.041195   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:14.069914   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:14.070129   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:14.507219   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:14.540409   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:14.570081   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:14.570246   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:15.007015   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:15.041356   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:15.070069   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:15.070157   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:15.506833   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:15.541264   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:15.569754   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:15.569777   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:16.007862   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:16.041344   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:16.070150   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:16.070277   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:16.507506   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:16.541165   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:16.569955   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:16.570111   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:17.006460   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:17.040507   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:17.069781   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:17.069833   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:17.506976   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:17.541409   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:17.569858   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:17.570000   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:18.007356   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:18.041967   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:18.070427   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:18.070441   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:18.507233   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:18.540713   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:18.569408   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:18.570156   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:19.007091   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:19.041684   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:19.069555   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:19.069936   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:19.507127   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:19.541706   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:19.570059   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:19.570413   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:20.007133   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:20.117216   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:20.117532   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:20.117543   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:20.506846   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:20.608069   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:20.608160   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:20.608196   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:21.006700   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:21.041087   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:21.070130   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:21.070215   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:21.507265   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:21.540789   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:21.570150   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:21.570345   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:22.006300   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:22.040180   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:22.069428   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:22.069551   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:22.507654   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:22.608249   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:22.608368   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:22.608394   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:23.007213   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:23.040512   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:23.070179   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:23.070200   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:23.507129   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:23.541456   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:23.570175   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:23.570386   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:24.006724   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:24.040447   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:24.068424   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:24.069685   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:24.507086   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:24.608273   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:24.608365   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:24.608597   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:25.006516   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:25.040379   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:25.069461   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:25.069639   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:25.506270   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:25.540946   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:25.569520   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:25.570050   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:26.007045   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:26.041611   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:26.069305   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:26.069813   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:26.507041   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:26.607931   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:26.607985   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:26.608161   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:27.006778   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:27.040782   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:27.069657   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:27.070156   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:27.507266   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:27.540321   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:27.608473   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:27.608525   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:28.007419   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:28.041094   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:28.069873   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:28.069877   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:28.506430   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:28.540283   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:28.608155   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:28.608169   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:29.006946   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:29.041415   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:29.070205   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:29.070399   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:29.507263   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:29.540300   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:29.569880   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:29.570125   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:30.006231   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:30.040716   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:30.069378   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:30.069890   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:30.506352   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:30.540255   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:30.569675   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:30.569732   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:31.006405   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:31.040968   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:31.069858   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:31.070432   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:31.540742   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:31.541346   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:31.643471   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:31.643544   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:32.006046   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:32.041290   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:32.070253   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:32.070584   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:32.507494   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:32.541245   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:32.608702   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:32.608848   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:33.006332   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:33.040369   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:33.069422   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:33.069494   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:33.506883   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:33.540586   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:33.569795   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:33.569834   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:34.005857   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:34.040852   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:34.069222   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:34.070164   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:34.507177   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:34.541404   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:34.569679   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:34.571006   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:35.006757   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:35.040995   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:35.069778   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:35.070210   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:35.507430   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:35.540697   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:35.568809   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:35.569771   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:36.006625   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:36.041077   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:36.069864   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:36.069901   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:36.506641   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:36.540774   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:36.569158   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:36.570158   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:37.006662   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:37.040495   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:37.069234   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:37.069472   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:37.507677   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:37.542023   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:37.570061   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:37.570262   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:38.007216   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:38.040614   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:38.068979   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:38.069772   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:38.506266   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:38.540042   15892 kapi.go:107] duration metric: took 1m6.00195971s to wait for kubernetes.io/minikube-addons=registry ...
	I1121 13:57:38.570550   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:38.570737   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:39.007599   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:39.071404   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:39.071606   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:39.506930   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:39.569379   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:39.570130   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:40.007228   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:40.070018   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:40.070115   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:40.507081   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:40.570150   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:40.570154   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:41.007427   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:41.070753   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:41.070915   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:41.506341   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:41.569422   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:41.569683   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:42.007490   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:42.070306   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:42.070336   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:42.507109   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:42.569729   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:42.569821   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:43.006108   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:43.070026   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:43.070040   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:43.506938   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:43.569901   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:43.570305   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:44.006902   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:44.107513   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:44.107555   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:44.506238   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:44.569675   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:44.569757   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:45.006325   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:45.069751   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:45.069822   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:45.536341   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:45.569925   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:45.570030   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:46.007992   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:46.069960   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:46.070446   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:46.507764   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:46.569388   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:46.569480   15892 kapi.go:107] duration metric: took 1m13.502997909s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1121 13:57:47.008789   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:47.109632   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:47.507937   15892 kapi.go:107] duration metric: took 1m8.004292079s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1121 13:57:47.509735   15892 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-243127 cluster.
	I1121 13:57:47.511518   15892 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1121 13:57:47.512755   15892 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
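
	The three gcp-auth messages above describe the opt-out mechanism: the webhook mounts GCP credentials into every new pod unless the pod spec carries the `gcp-auth-skip-secret` label key. Below is a minimal sketch of such a pod built with the Kubernetes Go types; the pod name and the label value "true" are illustrative choices (per the message, only the key's presence matters), not anything taken from minikube's implementation.

	// Sketch: a pod that opts out of gcp-auth credential mounting via the
	// `gcp-auth-skip-secret` label key described in the messages above.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // illustrative name
				// Any value works; the webhook only checks for the key.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "docker.io/kicbase/echo-server:1.0"},
				},
			},
		}
		out, err := yaml.Marshal(pod)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out)) // emits the pod manifest as YAML
	}
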
	I1121 13:57:47.571759   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:48.070657   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:48.571256   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:49.070790   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:49.571394   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:50.070216   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:50.569617   15892 kapi.go:107] duration metric: took 1m17.502411321s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1121 13:57:50.570986   15892 out.go:179] * Enabled addons: amd-gpu-device-plugin, registry-creds, storage-provisioner, inspektor-gadget, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, storage-provisioner-rancher, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1121 13:57:50.572035   15892 addons.go:530] duration metric: took 1m19.344223281s for enable addons: enabled=[amd-gpu-device-plugin registry-creds storage-provisioner inspektor-gadget nvidia-device-plugin cloud-spanner ingress-dns metrics-server storage-provisioner-rancher yakd default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1121 13:57:50.572070   15892 start.go:247] waiting for cluster config update ...
	I1121 13:57:50.572088   15892 start.go:256] writing updated cluster config ...
	I1121 13:57:50.572299   15892 ssh_runner.go:195] Run: rm -f paused
	I1121 13:57:50.575859   15892 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 13:57:50.578380   15892 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4zrd8" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:50.581628   15892 pod_ready.go:94] pod "coredns-66bc5c9577-4zrd8" is "Ready"
	I1121 13:57:50.581646   15892 pod_ready.go:86] duration metric: took 3.247968ms for pod "coredns-66bc5c9577-4zrd8" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:50.583155   15892 pod_ready.go:83] waiting for pod "etcd-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:50.586203   15892 pod_ready.go:94] pod "etcd-addons-243127" is "Ready"
	I1121 13:57:50.586219   15892 pod_ready.go:86] duration metric: took 3.049799ms for pod "etcd-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:50.587673   15892 pod_ready.go:83] waiting for pod "kube-apiserver-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:50.590771   15892 pod_ready.go:94] pod "kube-apiserver-addons-243127" is "Ready"
	I1121 13:57:50.590791   15892 pod_ready.go:86] duration metric: took 3.100408ms for pod "kube-apiserver-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:50.592116   15892 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:50.979548   15892 pod_ready.go:94] pod "kube-controller-manager-addons-243127" is "Ready"
	I1121 13:57:50.979587   15892 pod_ready.go:86] duration metric: took 387.452087ms for pod "kube-controller-manager-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:51.179410   15892 pod_ready.go:83] waiting for pod "kube-proxy-jjn5n" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:51.585535   15892 pod_ready.go:94] pod "kube-proxy-jjn5n" is "Ready"
	I1121 13:57:51.585610   15892 pod_ready.go:86] duration metric: took 406.173308ms for pod "kube-proxy-jjn5n" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:51.779110   15892 pod_ready.go:83] waiting for pod "kube-scheduler-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:52.179087   15892 pod_ready.go:94] pod "kube-scheduler-addons-243127" is "Ready"
	I1121 13:57:52.179112   15892 pod_ready.go:86] duration metric: took 399.979903ms for pod "kube-scheduler-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:52.179122   15892 pod_ready.go:40] duration metric: took 1.603241395s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
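
	Both the kapi.go waits earlier in the log and the pod_ready block just above follow the same pattern: poll pods matching a label selector at a fixed interval, with a timeout, until every match reports the Ready condition. A sketch of that pattern with client-go follows; it is an illustration of the mechanism visible in the log, not minikube's actual implementation, and the selector and timings are lifted from the log lines above.

	// Sketch: poll kube-system pods matching a label selector until all are Ready,
	// mirroring the kapi.go / pod_ready waits recorded above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		selector := "k8s-app=kube-dns" // one of the label selectors from the log
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
					metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // keep polling on transient errors or empty lists
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						return false, nil
					}
				}
				return true, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("all pods matching", selector, "are Ready")
	}
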
	I1121 13:57:52.221722   15892 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 13:57:52.224392   15892 out.go:179] * Done! kubectl is now configured to use "addons-243127" cluster and "default" namespace by default
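
	The start.go line above reports "minor skew: 0" for kubectl 1.34.2 against cluster 1.34.1. A small sketch of that computation follows, under the assumption that the check simply compares the minor components of the two version strings.

	// Sketch: the minor-skew comparison reported by the start.go line above,
	// assuming it compares only the minor version components.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component of a "major.minor.patch" version string.
	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0
		}
		n, _ := strconv.Atoi(parts[1])
		return n
	}

	func main() {
		kubectl, cluster := "1.34.2", "1.34.1" // values from the log line above
		skew := minor(kubectl) - minor(cluster)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
	}
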
	
	
	==> CRI-O <==
	Nov 21 14:00:33 addons-243127 crio[766]: time="2025-11-21T14:00:33.75094719Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-44t27/POD" id=9531b0a4-0f6f-4573-a29d-7da5e1bf65eb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:00:33 addons-243127 crio[766]: time="2025-11-21T14:00:33.751029433Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:00:33 addons-243127 crio[766]: time="2025-11-21T14:00:33.757899195Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-44t27 Namespace:default ID:2bf139b457dfff93cc9d761c0ed4e1739c8e72fa21eaaa0aedd7679f98b9ab5f UID:227b38f2-0ee1-43c8-a92d-03dab9d66c15 NetNS:/var/run/netns/b71f5bd7-1b6c-41c6-a2a2-27e4f2bd0916 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00029caa0}] Aliases:map[]}"
	Nov 21 14:00:33 addons-243127 crio[766]: time="2025-11-21T14:00:33.757932001Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-44t27 to CNI network \"kindnet\" (type=ptp)"
	Nov 21 14:00:33 addons-243127 crio[766]: time="2025-11-21T14:00:33.768403861Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-44t27 Namespace:default ID:2bf139b457dfff93cc9d761c0ed4e1739c8e72fa21eaaa0aedd7679f98b9ab5f UID:227b38f2-0ee1-43c8-a92d-03dab9d66c15 NetNS:/var/run/netns/b71f5bd7-1b6c-41c6-a2a2-27e4f2bd0916 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00029caa0}] Aliases:map[]}"
	Nov 21 14:00:33 addons-243127 crio[766]: time="2025-11-21T14:00:33.76852609Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-44t27 for CNI network kindnet (type=ptp)"
	Nov 21 14:00:33 addons-243127 crio[766]: time="2025-11-21T14:00:33.769344235Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 14:00:33 addons-243127 crio[766]: time="2025-11-21T14:00:33.770133104Z" level=info msg="Ran pod sandbox 2bf139b457dfff93cc9d761c0ed4e1739c8e72fa21eaaa0aedd7679f98b9ab5f with infra container: default/hello-world-app-5d498dc89-44t27/POD" id=9531b0a4-0f6f-4573-a29d-7da5e1bf65eb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:00:33 addons-243127 crio[766]: time="2025-11-21T14:00:33.77128974Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a30e458a-811b-496c-9d6a-3489eba73399 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:00:33 addons-243127 crio[766]: time="2025-11-21T14:00:33.771382082Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=a30e458a-811b-496c-9d6a-3489eba73399 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:00:33 addons-243127 crio[766]: time="2025-11-21T14:00:33.771409175Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=a30e458a-811b-496c-9d6a-3489eba73399 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:00:33 addons-243127 crio[766]: time="2025-11-21T14:00:33.772012275Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=b44be455-5db3-4db5-8be8-79e447ec49e9 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:00:33 addons-243127 crio[766]: time="2025-11-21T14:00:33.77629264Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 21 14:00:34 addons-243127 crio[766]: time="2025-11-21T14:00:34.588024497Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=b44be455-5db3-4db5-8be8-79e447ec49e9 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:00:34 addons-243127 crio[766]: time="2025-11-21T14:00:34.588485575Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=813e1bf2-e3ca-4ec8-b9e8-1c81e5116479 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:00:34 addons-243127 crio[766]: time="2025-11-21T14:00:34.590042951Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d177b550-8cbd-4f30-90d3-90857d28b843 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:00:34 addons-243127 crio[766]: time="2025-11-21T14:00:34.594590076Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-44t27/hello-world-app" id=2a8441bc-e896-47aa-8b64-cbe5c3e20ef1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:00:34 addons-243127 crio[766]: time="2025-11-21T14:00:34.594707587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:00:34 addons-243127 crio[766]: time="2025-11-21T14:00:34.600515862Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:00:34 addons-243127 crio[766]: time="2025-11-21T14:00:34.600698825Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/822ffab28f9fe936daaae0de28dce4a366a53fed66df8b85f2b1220b0acc14ad/merged/etc/passwd: no such file or directory"
	Nov 21 14:00:34 addons-243127 crio[766]: time="2025-11-21T14:00:34.60072291Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/822ffab28f9fe936daaae0de28dce4a366a53fed66df8b85f2b1220b0acc14ad/merged/etc/group: no such file or directory"
	Nov 21 14:00:34 addons-243127 crio[766]: time="2025-11-21T14:00:34.600916078Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:00:34 addons-243127 crio[766]: time="2025-11-21T14:00:34.632432603Z" level=info msg="Created container e9938ea77f71f48aedd3ff9c18f867d98c0716d2e1643f8dd0dbef28b164d2a5: default/hello-world-app-5d498dc89-44t27/hello-world-app" id=2a8441bc-e896-47aa-8b64-cbe5c3e20ef1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:00:34 addons-243127 crio[766]: time="2025-11-21T14:00:34.633019923Z" level=info msg="Starting container: e9938ea77f71f48aedd3ff9c18f867d98c0716d2e1643f8dd0dbef28b164d2a5" id=7310988a-09b6-4da8-a996-4dd8bbd921b4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:00:34 addons-243127 crio[766]: time="2025-11-21T14:00:34.634645332Z" level=info msg="Started container" PID=9816 containerID=e9938ea77f71f48aedd3ff9c18f867d98c0716d2e1643f8dd0dbef28b164d2a5 description=default/hello-world-app-5d498dc89-44t27/hello-world-app id=7310988a-09b6-4da8-a996-4dd8bbd921b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2bf139b457dfff93cc9d761c0ed4e1739c8e72fa21eaaa0aedd7679f98b9ab5f
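
	The CRI-O entries above record the standard CRI image flow for hello-world-app: ImageStatus finds docker.io/kicbase/echo-server:1.0 missing, PullImage fetches it, then CreateContainer/StartContainer run it. A sketch of that ImageStatus → PullImage pair from a CRI client follows; the socket path and control flow are illustrative assumptions, not CRI-O or minikube internals.

	// Sketch: the ImageStatus -> PullImage sequence the CRI-O log above records,
	// issued over the CRI gRPC API against an assumed crio socket path.
	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		images := runtimeapi.NewImageServiceClient(conn)
		spec := &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"}

		// "Checking image status": is the image already in local storage?
		status, err := images.ImageStatus(context.Background(),
			&runtimeapi.ImageStatusRequest{Image: spec})
		if err != nil {
			panic(err)
		}
		if status.Image == nil {
			// "Image ... not found": pull it, as the log shows CRI-O doing.
			resp, err := images.PullImage(context.Background(),
				&runtimeapi.PullImageRequest{Image: spec})
			if err != nil {
				panic(err)
			}
			fmt.Println("pulled:", resp.ImageRef)
		} else {
			fmt.Println("already present:", status.Image.Id)
		}
	}
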
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	e9938ea77f71f       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   2bf139b457dff       hello-world-app-5d498dc89-44t27            default
	0b9bbd56ef6c7       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   ca21eaac32980       registry-creds-764b6fb674-w4fpg            kube-system
	de254707c18ce       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   b6d57424a6137       nginx                                      default
	01ebaf6902236       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   a6471a4f996f0       busybox                                    default
	ccc5c287b598d       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago            Running             controller                               0                   d9050b5400ecb       ingress-nginx-controller-6c8bf45fb-ztr8c   ingress-nginx
	9dfbb58800a98       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   cf6b4c497d512       gcp-auth-78565c9fb4-996c5                  gcp-auth
	024fd155c71f4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   7f0f3547e5b3f       csi-hostpathplugin-4xdqt                   kube-system
	ea832cf9137b5       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             2 minutes ago            Exited              patch                                    2                   afa676fb164d3       ingress-nginx-admission-patch-skmg2        ingress-nginx
	a45184ba10995       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   7f0f3547e5b3f       csi-hostpathplugin-4xdqt                   kube-system
	c7cea569e79d9       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   7f0f3547e5b3f       csi-hostpathplugin-4xdqt                   kube-system
	f87ee9ca1eb0d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   7f0f3547e5b3f       csi-hostpathplugin-4xdqt                   kube-system
	b69dec080641a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   7f0f3547e5b3f       csi-hostpathplugin-4xdqt                   kube-system
	9298da412a875       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago            Running             gadget                                   0                   b0b9918aa76e1       gadget-hm6p2                               gadget
	733ab2d4f270d       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   8c9a9a16b0b64       registry-proxy-9k9gw                       kube-system
	6d89596515e60       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago            Running             csi-external-health-monitor-controller   0                   7f0f3547e5b3f       csi-hostpathplugin-4xdqt                   kube-system
	12a7ad65b01f3       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     2 minutes ago            Running             nvidia-device-plugin-ctr                 0                   d82c842fe855b       nvidia-device-plugin-daemonset-v2h2s       kube-system
	d4c89f2bb2211       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago            Exited              create                                   0                   788e7d5e412cb       ingress-nginx-admission-create-l9vg2       ingress-nginx
	3179500ac7719       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   1452bb4294121       snapshot-controller-7d9fbc56b8-l5nct       kube-system
	994774c9ca4f6       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   21bd0b917b208       csi-hostpath-attacher-0                    kube-system
	32c8ebd634641       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   bb211bfd6fb89       amd-gpu-device-plugin-rs4wk                kube-system
	56057aee31072       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   61805deadcee8       metrics-server-85b7d694d7-4khd6            kube-system
	77eb40d30250c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   f678007ed80fe       snapshot-controller-7d9fbc56b8-mw8mv       kube-system
	7d1e97c795b00       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   70f920cc3b876       csi-hostpath-resizer-0                     kube-system
	e40f4e5a20c27       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               3 minutes ago            Running             cloud-spanner-emulator                   0                   5b463692184c5       cloud-spanner-emulator-6f9fcf858b-mpbzj    default
	f49ed0d95068e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   ef4c49a25966c       local-path-provisioner-648f6765c9-k4mfq    local-path-storage
	b3df341d90d52       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   a6bfd5f6b61bc       registry-6b586f9694-2dn55                  kube-system
	545f7855f22e4       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   299ee5353cffc       yakd-dashboard-5ff678cb9-mwqnd             yakd-dashboard
	a862b6c84241d       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   b52ff90310e1c       kube-ingress-dns-minikube                  kube-system
	7d8b7a3c495d7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   94761203a3a56       storage-provisioner                        kube-system
	8a5df4965546d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   27999ed8852c0       coredns-66bc5c9577-4zrd8                   kube-system
	7fb6fcbbcafef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   2e3bd4f96a429       kube-proxy-jjn5n                           kube-system
	66f416a261181       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   e2c7255a2a076       kindnet-ftx9v                              kube-system
	b49596a0b2d4d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   80505f572dcc4       kube-controller-manager-addons-243127      kube-system
	6bc0a23d21b59       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   bb614fdcf0ded       kube-apiserver-addons-243127               kube-system
	61ca322c06942       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   cc8a92808ab8f       etcd-addons-243127                         kube-system
	19610d1d8120b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   b9bc571b017cd       kube-scheduler-addons-243127               kube-system
	
	
	==> coredns [8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262] <==
	[INFO] 10.244.0.22:51798 - 16114 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004837409s
	[INFO] 10.244.0.22:41783 - 7225 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004018615s
	[INFO] 10.244.0.22:41699 - 16665 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005204606s
	[INFO] 10.244.0.22:49251 - 45049 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005487082s
	[INFO] 10.244.0.22:51551 - 34388 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006091221s
	[INFO] 10.244.0.22:56289 - 57052 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000994965s
	[INFO] 10.244.0.22:40674 - 42662 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001996899s
	[INFO] 10.244.0.25:44764 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001415537s
	[INFO] 10.244.0.25:48598 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00015989s
	[INFO] 10.244.0.28:58199 - 28327 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000215886s
	[INFO] 10.244.0.28:54537 - 57652 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000274327s
	[INFO] 10.244.0.28:40169 - 54665 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000121335s
	[INFO] 10.244.0.28:55827 - 55886 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000173082s
	[INFO] 10.244.0.28:52520 - 12467 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000132651s
	[INFO] 10.244.0.28:43529 - 45839 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000170279s
	[INFO] 10.244.0.28:34289 - 2356 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003508601s
	[INFO] 10.244.0.28:51584 - 24604 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.004938209s
	[INFO] 10.244.0.28:60450 - 49345 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.006033599s
	[INFO] 10.244.0.28:39611 - 27806 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.008049555s
	[INFO] 10.244.0.28:42739 - 48709 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004683774s
	[INFO] 10.244.0.28:59277 - 21460 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005577375s
	[INFO] 10.244.0.28:47629 - 6143 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005199329s
	[INFO] 10.244.0.28:49018 - 62778 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005438585s
	[INFO] 10.244.0.28:37374 - 64059 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001884139s
	[INFO] 10.244.0.28:52980 - 50592 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.002175181s
	
	
	==> describe nodes <==
	Name:               addons-243127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-243127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=addons-243127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T13_56_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-243127
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-243127"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 13:56:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-243127
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:00:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:00:30 +0000   Fri, 21 Nov 2025 13:56:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:00:30 +0000   Fri, 21 Nov 2025 13:56:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:00:30 +0000   Fri, 21 Nov 2025 13:56:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:00:30 +0000   Fri, 21 Nov 2025 13:57:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-243127
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                93b2b064-c0a3-4ff3-b97c-0aadda05f1d2
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  default                     cloud-spanner-emulator-6f9fcf858b-mpbzj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  default                     hello-world-app-5d498dc89-44t27             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-hm6p2                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  gcp-auth                    gcp-auth-78565c9fb4-996c5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-ztr8c    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m2s
	  kube-system                 amd-gpu-device-plugin-rs4wk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 coredns-66bc5c9577-4zrd8                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m4s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 csi-hostpathplugin-4xdqt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 etcd-addons-243127                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-ftx9v                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m4s
	  kube-system                 kube-apiserver-addons-243127                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-addons-243127       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-proxy-jjn5n                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-addons-243127                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 metrics-server-85b7d694d7-4khd6             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m2s
	  kube-system                 nvidia-device-plugin-daemonset-v2h2s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 registry-6b586f9694-2dn55                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 registry-creds-764b6fb674-w4fpg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 registry-proxy-9k9gw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 snapshot-controller-7d9fbc56b8-l5nct        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-mw8mv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  local-path-storage          local-path-provisioner-648f6765c9-k4mfq     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-mwqnd              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 4m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s (x8 over 4m14s)  kubelet          Node addons-243127 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x8 over 4m14s)  kubelet          Node addons-243127 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x8 over 4m14s)  kubelet          Node addons-243127 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m9s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s                   kubelet          Node addons-243127 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s                   kubelet          Node addons-243127 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s                   kubelet          Node addons-243127 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m5s                   node-controller  Node addons-243127 event: Registered Node addons-243127 in Controller
	  Normal  NodeReady                3m23s                  kubelet          Node addons-243127 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1] <==
	{"level":"warn","ts":"2025-11-21T13:56:22.252704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.258063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.264248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.269703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.275974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.282886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.289089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.294678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.300302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.305936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.311746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.317402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.331524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.337244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.343519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.387766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:33.547857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:59.753261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:59.760087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:59.774413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:59.780610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58540","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T13:57:19.926728Z","caller":"traceutil/trace.go:172","msg":"trace[1719754312] transaction","detail":"{read_only:false; response_revision:992; number_of_response:1; }","duration":"115.338233ms","start":"2025-11-21T13:57:19.811367Z","end":"2025-11-21T13:57:19.926705Z","steps":["trace[1719754312] 'process raft request'  (duration: 61.395326ms)","trace[1719754312] 'compare'  (duration: 53.879922ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-21T13:57:31.395119Z","caller":"traceutil/trace.go:172","msg":"trace[1058167776] transaction","detail":"{read_only:false; response_revision:1068; number_of_response:1; }","duration":"104.795927ms","start":"2025-11-21T13:57:31.290306Z","end":"2025-11-21T13:57:31.395102Z","steps":["trace[1058167776] 'process raft request'  (duration: 104.707018ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T13:57:31.539346Z","caller":"traceutil/trace.go:172","msg":"trace[2071703886] transaction","detail":"{read_only:false; response_revision:1073; number_of_response:1; }","duration":"128.476604ms","start":"2025-11-21T13:57:31.410850Z","end":"2025-11-21T13:57:31.539327Z","steps":["trace[2071703886] 'process raft request'  (duration: 85.698692ms)","trace[2071703886] 'compare'  (duration: 42.651001ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-21T13:58:47.530600Z","caller":"traceutil/trace.go:172","msg":"trace[452937712] transaction","detail":"{read_only:false; response_revision:1545; number_of_response:1; }","duration":"117.00131ms","start":"2025-11-21T13:58:47.413551Z","end":"2025-11-21T13:58:47.530552Z","steps":["trace[452937712] 'process raft request'  (duration: 62.743654ms)","trace[452937712] 'compare'  (duration: 54.097158ms)"],"step_count":2}
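Note: the etcd "trace" entries above mark transactions that crossed the 100ms slow-request tracing threshold (115ms, 104ms, 128ms, 117ms); isolated spikes of this size on a shared CI host usually point to disk or CPU contention rather than an etcd fault. One hedged way to probe it, assuming minikube's usual cert layout and the etcd-<node> pod naming:

	# etcd's built-in perf check against the local member (a sketch, not from the test run)
	kubectl --context addons-243127 -n kube-system exec etcd-addons-243127 -- \
	  etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    check perf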
	
	
	==> gcp-auth [9dfbb58800a985a6ac2bc0a3743b7e1c8a75e2fe19948e9397ed4fd00a7eb867] <==
	2025/11/21 13:57:46 GCP Auth Webhook started!
	2025/11/21 13:57:52 Ready to marshal response ...
	2025/11/21 13:57:52 Ready to write response ...
	2025/11/21 13:57:52 Ready to marshal response ...
	2025/11/21 13:57:52 Ready to write response ...
	2025/11/21 13:57:52 Ready to marshal response ...
	2025/11/21 13:57:52 Ready to write response ...
	2025/11/21 13:58:06 Ready to marshal response ...
	2025/11/21 13:58:06 Ready to write response ...
	2025/11/21 13:58:11 Ready to marshal response ...
	2025/11/21 13:58:11 Ready to write response ...
	2025/11/21 13:58:12 Ready to marshal response ...
	2025/11/21 13:58:12 Ready to write response ...
	2025/11/21 13:58:14 Ready to marshal response ...
	2025/11/21 13:58:14 Ready to write response ...
	2025/11/21 13:58:14 Ready to marshal response ...
	2025/11/21 13:58:14 Ready to write response ...
	2025/11/21 13:58:23 Ready to marshal response ...
	2025/11/21 13:58:23 Ready to write response ...
	2025/11/21 13:58:28 Ready to marshal response ...
	2025/11/21 13:58:28 Ready to write response ...
	2025/11/21 14:00:33 Ready to marshal response ...
	2025/11/21 14:00:33 Ready to write response ...
	
	
	==> kernel <==
	 14:00:35 up 43 min,  0 user,  load average: 0.42, 0.89, 0.46
	Linux addons-243127 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081] <==
	I1121 13:58:31.455267       1 main.go:301] handling current node
	I1121 13:58:41.455571       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:58:41.455629       1 main.go:301] handling current node
	I1121 13:58:51.455046       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:58:51.455076       1 main.go:301] handling current node
	I1121 13:59:01.455392       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:59:01.455417       1 main.go:301] handling current node
	I1121 13:59:11.455367       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:59:11.455397       1 main.go:301] handling current node
	I1121 13:59:21.455137       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:59:21.455166       1 main.go:301] handling current node
	I1121 13:59:31.456629       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:59:31.456657       1 main.go:301] handling current node
	I1121 13:59:41.454466       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:59:41.454504       1 main.go:301] handling current node
	I1121 13:59:51.455339       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:59:51.455369       1 main.go:301] handling current node
	I1121 14:00:01.455110       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:00:01.455136       1 main.go:301] handling current node
	I1121 14:00:11.460122       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:00:11.460159       1 main.go:301] handling current node
	I1121 14:00:21.461177       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:00:21.461203       1 main.go:301] handling current node
	I1121 14:00:31.455407       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:00:31.455429       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624] <==
	 > logger="UnhandledError"
	E1121 13:57:31.548877       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	E1121 13:57:31.550590       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	E1121 13:57:31.555978       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	E1121 13:57:31.577158       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	E1121 13:57:31.618202       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	E1121 13:57:31.699365       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	E1121 13:57:31.860143       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	E1121 13:57:32.181258       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	W1121 13:57:32.549117       1 handler_proxy.go:99] no RequestInfo found in the context
	W1121 13:57:32.549158       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 13:57:32.549172       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1121 13:57:32.549190       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1121 13:57:32.549229       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1121 13:57:32.550354       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1121 13:57:32.847176       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1121 13:58:00.862216       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51614: use of closed network connection
	E1121 13:58:00.997048       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51648: use of closed network connection
	I1121 13:58:11.982072       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1121 13:58:12.167660       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.238.248"}
	I1121 13:58:15.942344       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1121 14:00:33.515307       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.27.156"}
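Note: the burst of "v1beta1.metrics.k8s.io ... connection refused" and OpenAPI 503 errors above spans the window while metrics-server was still starting behind its aggregated APIService; they stop once the group is registered at 13:57:32. The controller-manager's "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors below share the same cause. A quick status check, as a sketch:

	# AVAILABLE=True means the aggregated metrics API is being served again
	kubectl --context addons-243127 get apiservice v1beta1.metrics.k8s.io -o wide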
	
	
	==> kube-controller-manager [b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52] <==
	I1121 13:56:29.741011       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 13:56:29.741053       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 13:56:29.742785       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 13:56:29.743982       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 13:56:29.744027       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 13:56:29.744057       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 13:56:29.744066       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 13:56:29.744073       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 13:56:29.744086       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 13:56:29.748499       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 13:56:29.749950       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-243127" podCIDRs=["10.244.0.0/24"]
	I1121 13:56:29.753035       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 13:56:29.755186       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 13:56:29.755202       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 13:56:29.755209       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1121 13:56:59.748424       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1121 13:56:59.748554       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1121 13:56:59.748619       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1121 13:56:59.760372       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1121 13:56:59.765061       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1121 13:56:59.849012       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 13:56:59.866193       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 13:57:14.745391       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1121 13:57:29.854105       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1121 13:57:29.872778       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc] <==
	I1121 13:56:31.039257       1 server_linux.go:53] "Using iptables proxy"
	I1121 13:56:31.100960       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 13:56:31.202087       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 13:56:31.202121       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 13:56:31.202228       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 13:56:31.219791       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 13:56:31.219836       1 server_linux.go:132] "Using iptables Proxier"
	I1121 13:56:31.225226       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 13:56:31.231141       1 server.go:527] "Version info" version="v1.34.1"
	I1121 13:56:31.231182       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 13:56:31.233076       1 config.go:200] "Starting service config controller"
	I1121 13:56:31.233101       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 13:56:31.237185       1 config.go:106] "Starting endpoint slice config controller"
	I1121 13:56:31.237203       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 13:56:31.237746       1 config.go:309] "Starting node config controller"
	I1121 13:56:31.238910       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 13:56:31.238929       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 13:56:31.238820       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 13:56:31.238955       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 13:56:31.333259       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 13:56:31.338853       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 13:56:31.345472       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
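Note: the single kube-proxy error above is advisory rather than fatal: with nodePortAddresses unset, NodePort services accept connections on every local IP, which is minikube's default behaviour. The log names the stricter alternative itself; as a sketch of the suggested setting:

	# restrict NodePort listeners to the node's primary address family, per the hint above
	kube-proxy --nodeport-addresses primary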
	
	
	==> kube-scheduler [19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b] <==
	E1121 13:56:22.757009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 13:56:22.757022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 13:56:22.757021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 13:56:22.757139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 13:56:22.757186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 13:56:22.757286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 13:56:22.757299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 13:56:22.757546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 13:56:22.757616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 13:56:22.757623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 13:56:22.757728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 13:56:22.757735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 13:56:22.757734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 13:56:22.757856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 13:56:22.757941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 13:56:23.605015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 13:56:23.757950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 13:56:23.791786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 13:56:23.840983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 13:56:23.843945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 13:56:23.844034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 13:56:23.901430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 13:56:23.920378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 13:56:23.956388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1121 13:56:24.255297       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
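Note: the "Failed to watch ... is forbidden" errors above are confined to the first couple of seconds after kube-scheduler starts, before the system:kube-scheduler RBAC bindings are visible to it; the closing "Caches are synced" line at 13:56:24 shows the informers recovered. A hedged way to verify the permissions after startup:

	# should print "yes" once the scheduler's cluster role bindings are in place
	kubectl --context addons-243127 auth can-i list pods --as=system:kube-scheduler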
	
	
	==> kubelet <==
	Nov 21 13:58:37 addons-243127 kubelet[1285]: I1121 13:58:37.209390    1285 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlxhw\" (UniqueName: \"kubernetes.io/projected/31ba2b24-9eb8-4903-85b4-12ac9f048ae8-kube-api-access-hlxhw\") pod \"31ba2b24-9eb8-4903-85b4-12ac9f048ae8\" (UID: \"31ba2b24-9eb8-4903-85b4-12ac9f048ae8\") "
	Nov 21 13:58:37 addons-243127 kubelet[1285]: I1121 13:58:37.209388    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ba2b24-9eb8-4903-85b4-12ac9f048ae8-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "31ba2b24-9eb8-4903-85b4-12ac9f048ae8" (UID: "31ba2b24-9eb8-4903-85b4-12ac9f048ae8"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 21 13:58:37 addons-243127 kubelet[1285]: I1121 13:58:37.211286    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ba2b24-9eb8-4903-85b4-12ac9f048ae8-kube-api-access-hlxhw" (OuterVolumeSpecName: "kube-api-access-hlxhw") pod "31ba2b24-9eb8-4903-85b4-12ac9f048ae8" (UID: "31ba2b24-9eb8-4903-85b4-12ac9f048ae8"). InnerVolumeSpecName "kube-api-access-hlxhw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 21 13:58:37 addons-243127 kubelet[1285]: I1121 13:58:37.309899    1285 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^27e3cd62-c6e2-11f0-987d-26e6f1ce4705\") pod \"31ba2b24-9eb8-4903-85b4-12ac9f048ae8\" (UID: \"31ba2b24-9eb8-4903-85b4-12ac9f048ae8\") "
	Nov 21 13:58:37 addons-243127 kubelet[1285]: I1121 13:58:37.310055    1285 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hlxhw\" (UniqueName: \"kubernetes.io/projected/31ba2b24-9eb8-4903-85b4-12ac9f048ae8-kube-api-access-hlxhw\") on node \"addons-243127\" DevicePath \"\""
	Nov 21 13:58:37 addons-243127 kubelet[1285]: I1121 13:58:37.310077    1285 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/31ba2b24-9eb8-4903-85b4-12ac9f048ae8-gcp-creds\") on node \"addons-243127\" DevicePath \"\""
	Nov 21 13:58:37 addons-243127 kubelet[1285]: I1121 13:58:37.312687    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^27e3cd62-c6e2-11f0-987d-26e6f1ce4705" (OuterVolumeSpecName: "task-pv-storage") pod "31ba2b24-9eb8-4903-85b4-12ac9f048ae8" (UID: "31ba2b24-9eb8-4903-85b4-12ac9f048ae8"). InnerVolumeSpecName "pvc-f943ac9e-f9d1-4517-bb9e-a78308d2e872". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 21 13:58:37 addons-243127 kubelet[1285]: I1121 13:58:37.411074    1285 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-f943ac9e-f9d1-4517-bb9e-a78308d2e872\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^27e3cd62-c6e2-11f0-987d-26e6f1ce4705\") on node \"addons-243127\" "
	Nov 21 13:58:37 addons-243127 kubelet[1285]: I1121 13:58:37.415810    1285 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-f943ac9e-f9d1-4517-bb9e-a78308d2e872" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^27e3cd62-c6e2-11f0-987d-26e6f1ce4705") on node "addons-243127"
	Nov 21 13:58:37 addons-243127 kubelet[1285]: I1121 13:58:37.511964    1285 reconciler_common.go:299] "Volume detached for volume \"pvc-f943ac9e-f9d1-4517-bb9e-a78308d2e872\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^27e3cd62-c6e2-11f0-987d-26e6f1ce4705\") on node \"addons-243127\" DevicePath \"\""
	Nov 21 13:58:37 addons-243127 kubelet[1285]: I1121 13:58:37.641110    1285 scope.go:117] "RemoveContainer" containerID="261d360c44732f5b839417fe2b7871a832bec98668a910539250cfd475ca1172"
	Nov 21 13:58:37 addons-243127 kubelet[1285]: I1121 13:58:37.650655    1285 scope.go:117] "RemoveContainer" containerID="261d360c44732f5b839417fe2b7871a832bec98668a910539250cfd475ca1172"
	Nov 21 13:58:37 addons-243127 kubelet[1285]: E1121 13:58:37.651029    1285 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"261d360c44732f5b839417fe2b7871a832bec98668a910539250cfd475ca1172\": container with ID starting with 261d360c44732f5b839417fe2b7871a832bec98668a910539250cfd475ca1172 not found: ID does not exist" containerID="261d360c44732f5b839417fe2b7871a832bec98668a910539250cfd475ca1172"
	Nov 21 13:58:37 addons-243127 kubelet[1285]: I1121 13:58:37.651065    1285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"261d360c44732f5b839417fe2b7871a832bec98668a910539250cfd475ca1172"} err="failed to get container status \"261d360c44732f5b839417fe2b7871a832bec98668a910539250cfd475ca1172\": rpc error: code = NotFound desc = could not find container \"261d360c44732f5b839417fe2b7871a832bec98668a910539250cfd475ca1172\": container with ID starting with 261d360c44732f5b839417fe2b7871a832bec98668a910539250cfd475ca1172 not found: ID does not exist"
	Nov 21 13:58:39 addons-243127 kubelet[1285]: I1121 13:58:39.109263    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ba2b24-9eb8-4903-85b4-12ac9f048ae8" path="/var/lib/kubelet/pods/31ba2b24-9eb8-4903-85b4-12ac9f048ae8/volumes"
	Nov 21 13:58:47 addons-243127 kubelet[1285]: I1121 13:58:47.107713    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9k9gw" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 13:59:02 addons-243127 kubelet[1285]: I1121 13:59:02.106586    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-rs4wk" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 13:59:05 addons-243127 kubelet[1285]: I1121 13:59:05.107480    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-v2h2s" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 13:59:25 addons-243127 kubelet[1285]: I1121 13:59:25.161974    1285 scope.go:117] "RemoveContainer" containerID="9ffa680ddafccc7b7a22e27b5ff54e677fb3a1f614805593fb8e126a6c0ca6dd"
	Nov 21 14:00:01 addons-243127 kubelet[1285]: I1121 14:00:01.107063    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9k9gw" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 14:00:12 addons-243127 kubelet[1285]: I1121 14:00:12.106938    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-rs4wk" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 14:00:21 addons-243127 kubelet[1285]: I1121 14:00:21.107192    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-v2h2s" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 14:00:33 addons-243127 kubelet[1285]: I1121 14:00:33.596480    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhmtk\" (UniqueName: \"kubernetes.io/projected/227b38f2-0ee1-43c8-a92d-03dab9d66c15-kube-api-access-bhmtk\") pod \"hello-world-app-5d498dc89-44t27\" (UID: \"227b38f2-0ee1-43c8-a92d-03dab9d66c15\") " pod="default/hello-world-app-5d498dc89-44t27"
	Nov 21 14:00:33 addons-243127 kubelet[1285]: I1121 14:00:33.596651    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/227b38f2-0ee1-43c8-a92d-03dab9d66c15-gcp-creds\") pod \"hello-world-app-5d498dc89-44t27\" (UID: \"227b38f2-0ee1-43c8-a92d-03dab9d66c15\") " pod="default/hello-world-app-5d498dc89-44t27"
	Nov 21 14:00:35 addons-243127 kubelet[1285]: I1121 14:00:35.062532    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-44t27" podStartSLOduration=1.244756738 podStartE2EDuration="2.062512768s" podCreationTimestamp="2025-11-21 14:00:33 +0000 UTC" firstStartedPulling="2025-11-21 14:00:33.771697188 +0000 UTC m=+248.742219423" lastFinishedPulling="2025-11-21 14:00:34.589453221 +0000 UTC m=+249.559975453" observedRunningTime="2025-11-21 14:00:35.061903609 +0000 UTC m=+250.032425859" watchObservedRunningTime="2025-11-21 14:00:35.062512768 +0000 UTC m=+250.033035024"
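Note: the recurring kubelet "Unable to retrieve pull secret ... secret \"gcp-auth\" not found" messages look consistent with the gcp-auth webhook not injecting its secret into kube-system, so DaemonSet pods there fall back to anonymous image pulls; the pulls evidently still succeed, since those pods are running. A sketch for confirming where the secret actually exists:

	kubectl --context addons-243127 get secrets -A --field-selector metadata.name=gcp-auth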
	
	
	==> storage-provisioner [7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a] <==
	W1121 14:00:11.079051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:13.081554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:13.084948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:15.088263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:15.091754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:17.094224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:17.098510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:19.101452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:19.104739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:21.107083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:21.111646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:23.114492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:23.118837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:25.121924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:25.126826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:27.130505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:27.134078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:29.136660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:29.139679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:31.141996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:31.146297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:33.148923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:33.152307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:35.154851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:00:35.158458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
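Note: the storage-provisioner warnings above are client-go deprecation notices repeating with each ~2s leader-election poll: the provisioner still uses v1 Endpoints, which is deprecated in favour of discovery.k8s.io/v1 EndpointSlice (and of Lease-based election). They do not indicate a failure. The replacement resources can be listed, as a sketch:

	kubectl --context addons-243127 get endpointslices.discovery.k8s.io -A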
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-243127 -n addons-243127
helpers_test.go:269: (dbg) Run:  kubectl --context addons-243127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-l9vg2 ingress-nginx-admission-patch-skmg2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-243127 describe pod ingress-nginx-admission-create-l9vg2 ingress-nginx-admission-patch-skmg2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-243127 describe pod ingress-nginx-admission-create-l9vg2 ingress-nginx-admission-patch-skmg2: exit status 1 (52.368846ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-l9vg2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-skmg2" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-243127 describe pod ingress-nginx-admission-create-l9vg2 ingress-nginx-admission-patch-skmg2: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (229.793831ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1121 14:00:35.791317   30326 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:00:35.791638   30326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:00:35.791648   30326 out.go:374] Setting ErrFile to fd 2...
	I1121 14:00:35.791652   30326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:00:35.791869   30326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:00:35.792155   30326 mustload.go:66] Loading cluster: addons-243127
	I1121 14:00:35.792511   30326 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:00:35.792528   30326 addons.go:622] checking whether the cluster is paused
	I1121 14:00:35.792656   30326 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:00:35.792674   30326 host.go:66] Checking if "addons-243127" exists ...
	I1121 14:00:35.793052   30326 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 14:00:35.810977   30326 ssh_runner.go:195] Run: systemctl --version
	I1121 14:00:35.811027   30326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 14:00:35.826250   30326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 14:00:35.918474   30326 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:00:35.918529   30326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:00:35.945620   30326 cri.go:89] found id: "0b9bbd56ef6c76ed87506b65b92848c527e40f5c539ec9f68e55961be7ec43c9"
	I1121 14:00:35.945638   30326 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 14:00:35.945642   30326 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 14:00:35.945645   30326 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 14:00:35.945649   30326 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 14:00:35.945654   30326 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 14:00:35.945659   30326 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 14:00:35.945663   30326 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 14:00:35.945668   30326 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 14:00:35.945689   30326 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 14:00:35.945698   30326 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 14:00:35.945702   30326 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 14:00:35.945707   30326 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 14:00:35.945711   30326 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 14:00:35.945718   30326 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 14:00:35.945724   30326 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 14:00:35.945730   30326 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 14:00:35.945739   30326 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 14:00:35.945744   30326 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 14:00:35.945776   30326 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 14:00:35.945784   30326 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 14:00:35.945792   30326 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 14:00:35.945796   30326 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 14:00:35.945803   30326 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 14:00:35.945808   30326 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 14:00:35.945826   30326 cri.go:89] found id: ""
	I1121 14:00:35.945869   30326 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:00:35.959268   30326 out.go:203] 
	W1121 14:00:35.960613   30326 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:00:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:00:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:00:35.960629   30326 out.go:285] * 
	* 
	W1121 14:00:35.963623   30326 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:00:35.964871   30326 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-243127 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
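Note: this exit status 11 is the same MK_ADDON_DISABLE_PAUSED failure seen across the report: before disabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node, and under this CRI-O setup that fails with "open /run/runc: no such file or directory" because containers are not managed through runc's default state directory (CRI-O commonly defaults to crun). A hedged reproduction using the exact commands from the log above:

	# the CRI-level listing minikube runs first still succeeds
	docker exec addons-243127 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the pause check itself fails, matching the stderr above
	docker exec addons-243127 sudo runc list -f json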
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 addons disable ingress --alsologtostderr -v=1: exit status 11 (228.456455ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1121 14:00:36.022129   30387 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:00:36.022438   30387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:00:36.022449   30387 out.go:374] Setting ErrFile to fd 2...
	I1121 14:00:36.022456   30387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:00:36.022675   30387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:00:36.022945   30387 mustload.go:66] Loading cluster: addons-243127
	I1121 14:00:36.023263   30387 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:00:36.023280   30387 addons.go:622] checking whether the cluster is paused
	I1121 14:00:36.023393   30387 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:00:36.023410   30387 host.go:66] Checking if "addons-243127" exists ...
	I1121 14:00:36.023821   30387 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 14:00:36.041026   30387 ssh_runner.go:195] Run: systemctl --version
	I1121 14:00:36.041077   30387 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 14:00:36.058387   30387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 14:00:36.149479   30387 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:00:36.149581   30387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:00:36.176365   30387 cri.go:89] found id: "0b9bbd56ef6c76ed87506b65b92848c527e40f5c539ec9f68e55961be7ec43c9"
	I1121 14:00:36.176385   30387 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 14:00:36.176389   30387 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 14:00:36.176392   30387 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 14:00:36.176394   30387 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 14:00:36.176397   30387 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 14:00:36.176400   30387 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 14:00:36.176402   30387 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 14:00:36.176404   30387 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 14:00:36.176411   30387 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 14:00:36.176414   30387 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 14:00:36.176416   30387 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 14:00:36.176419   30387 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 14:00:36.176421   30387 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 14:00:36.176424   30387 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 14:00:36.176434   30387 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 14:00:36.176441   30387 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 14:00:36.176445   30387 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 14:00:36.176448   30387 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 14:00:36.176451   30387 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 14:00:36.176453   30387 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 14:00:36.176456   30387 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 14:00:36.176458   30387 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 14:00:36.176460   30387 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 14:00:36.176463   30387 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 14:00:36.176465   30387 cri.go:89] found id: ""
	I1121 14:00:36.176502   30387 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:00:36.189366   30387 out.go:203] 
	W1121 14:00:36.190465   30387 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:00:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:00:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:00:36.190483   30387 out.go:285] * 
	* 
	W1121 14:00:36.193430   30387 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:00:36.194670   30387 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-243127 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.47s)
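Every enable/disable failure in this run takes the same MK_ADDON_DISABLE_PAUSED path: before touching an addon, minikube checks whether the cluster is paused, and that check shells out to `sudo runc list -f json`, which fails on this crio node because /run/runc does not exist. A minimal reproduction sketch against the same profile (the two commands are exactly the ones shown in the log; whether crio is driving runc here or a different OCI runtime such as crun, whose state root would live elsewhere, is an assumption):

	# crio itself is healthy: this is the query that produced the "found id" list above
	$ minikube -p addons-243127 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the paused-check then asks runc, which reads its state root (/run/runc by default)
	# and fails because that directory was never created on this node
	$ minikube -p addons-243127 ssh -- sudo runc list -f json
	time="..." level=error msg="open /run/runc: no such file or directory"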

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-hm6p2" [c37404c8-bd76-4682-9971-9a9548f87ce8] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003168553s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (246.820319ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 13:58:11.543285   25518 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:58:11.543734   25518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:11.543754   25518 out.go:374] Setting ErrFile to fd 2...
	I1121 13:58:11.543761   25518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:11.544219   25518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:58:11.544817   25518 mustload.go:66] Loading cluster: addons-243127
	I1121 13:58:11.545219   25518 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:11.545237   25518 addons.go:622] checking whether the cluster is paused
	I1121 13:58:11.545319   25518 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:11.545337   25518 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:58:11.545725   25518 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:58:11.565936   25518 ssh_runner.go:195] Run: systemctl --version
	I1121 13:58:11.565985   25518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:58:11.583740   25518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:58:11.679546   25518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:58:11.679641   25518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:58:11.706674   25518 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 13:58:11.706698   25518 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 13:58:11.706704   25518 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 13:58:11.706709   25518 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 13:58:11.706713   25518 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 13:58:11.706717   25518 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 13:58:11.706720   25518 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 13:58:11.706723   25518 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 13:58:11.706725   25518 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 13:58:11.706729   25518 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 13:58:11.706732   25518 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 13:58:11.706735   25518 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 13:58:11.706738   25518 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 13:58:11.706742   25518 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 13:58:11.706746   25518 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 13:58:11.706757   25518 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 13:58:11.706765   25518 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 13:58:11.706772   25518 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 13:58:11.706776   25518 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 13:58:11.706780   25518 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 13:58:11.706787   25518 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 13:58:11.706792   25518 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 13:58:11.706796   25518 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 13:58:11.706803   25518 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 13:58:11.706807   25518 cri.go:89] found id: ""
	I1121 13:58:11.706850   25518 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:58:11.720609   25518 out.go:203] 
	W1121 13:58:11.721655   25518 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:58:11.721678   25518 out.go:285] * 
	W1121 13:58:11.726731   25518 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:58:11.727944   25518 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-243127 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)
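The gadget pod went healthy within 5s; only the addon-disable step failed, inside the same paused-check. The flow the log shows can be reconstructed as two steps (a sketch of what the cri.go listing and the runc call are doing, not minikube's actual code):

	# 1) collect the kube-system container IDs from crio (the "found id" lines):
	$ sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# 2) ask the OCI runtime for container state; CRI has no "paused" container
	#    state (only created/running/exited/unknown), which is presumably why the
	#    check consults runc directly, and this is the step that fails:
	$ sudo runc list -f json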

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.32s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.225982ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-4khd6" [9d42569c-8cf8-439e-858c-1acf1f059214] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002454334s
addons_test.go:463: (dbg) Run:  kubectl --context addons-243127 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (252.301586ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 13:58:11.603910   25541 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:58:11.604048   25541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:11.604058   25541 out.go:374] Setting ErrFile to fd 2...
	I1121 13:58:11.604062   25541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:11.604279   25541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:58:11.604584   25541 mustload.go:66] Loading cluster: addons-243127
	I1121 13:58:11.604972   25541 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:11.604990   25541 addons.go:622] checking whether the cluster is paused
	I1121 13:58:11.605082   25541 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:11.605097   25541 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:58:11.605443   25541 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:58:11.622159   25541 ssh_runner.go:195] Run: systemctl --version
	I1121 13:58:11.622193   25541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:58:11.638738   25541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:58:11.735846   25541 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:58:11.735919   25541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:58:11.770485   25541 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 13:58:11.770526   25541 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 13:58:11.770532   25541 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 13:58:11.770537   25541 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 13:58:11.770542   25541 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 13:58:11.770546   25541 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 13:58:11.770551   25541 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 13:58:11.770555   25541 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 13:58:11.770582   25541 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 13:58:11.770593   25541 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 13:58:11.770599   25541 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 13:58:11.770603   25541 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 13:58:11.770608   25541 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 13:58:11.770612   25541 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 13:58:11.770616   25541 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 13:58:11.770634   25541 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 13:58:11.770640   25541 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 13:58:11.770661   25541 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 13:58:11.770666   25541 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 13:58:11.770670   25541 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 13:58:11.770677   25541 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 13:58:11.770681   25541 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 13:58:11.770685   25541 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 13:58:11.770690   25541 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 13:58:11.770693   25541 cri.go:89] found id: ""
	I1121 13:58:11.770738   25541 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:58:11.787132   25541 out.go:203] 
	W1121 13:58:11.788134   25541 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:58:11.788156   25541 out.go:285] * 
	W1121 13:58:11.792942   25541 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:58:11.793964   25541 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-243127 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)
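The metrics pipeline itself was working: the pod went healthy and `kubectl top pods` returned without error before the disable call failed. The same health checks can be run by hand (standard kubectl commands; the label selector comes from the log, and v1beta1.metrics.k8s.io is the APIService name metrics-server conventionally registers, not something shown in this report):

	$ kubectl --context addons-243127 get pods -n kube-system -l k8s-app=metrics-server
	$ kubectl --context addons-243127 get apiservice v1beta1.metrics.k8s.io
	$ kubectl --context addons-243127 top pods -n kube-system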

                                                
                                    
x
+
TestAddons/parallel/CSI (34.84s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1121 13:58:03.594291   14542 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1121 13:58:03.597274   14542 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1121 13:58:03.597293   14542 kapi.go:107] duration metric: took 3.024925ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.034108ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-243127 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-243127 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b131fbcd-6556-4a3d-aac1-f55e2814944e] Pending
helpers_test.go:352: "task-pv-pod" [b131fbcd-6556-4a3d-aac1-f55e2814944e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [b131fbcd-6556-4a3d-aac1-f55e2814944e] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003637024s
addons_test.go:572: (dbg) Run:  kubectl --context addons-243127 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-243127 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-243127 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-243127 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-243127 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-243127 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-243127 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [31ba2b24-9eb8-4903-85b4-12ac9f048ae8] Pending
helpers_test.go:352: "task-pv-pod-restore" [31ba2b24-9eb8-4903-85b4-12ac9f048ae8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [31ba2b24-9eb8-4903-85b4-12ac9f048ae8] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003054914s
addons_test.go:614: (dbg) Run:  kubectl --context addons-243127 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-243127 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-243127 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (225.84657ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 13:58:38.022337   28102 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:58:38.022642   28102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:38.022651   28102 out.go:374] Setting ErrFile to fd 2...
	I1121 13:58:38.022656   28102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:38.022837   28102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:58:38.023079   28102 mustload.go:66] Loading cluster: addons-243127
	I1121 13:58:38.023432   28102 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:38.023448   28102 addons.go:622] checking whether the cluster is paused
	I1121 13:58:38.023532   28102 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:38.023544   28102 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:58:38.023898   28102 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:58:38.040442   28102 ssh_runner.go:195] Run: systemctl --version
	I1121 13:58:38.040480   28102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:58:38.056447   28102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:58:38.148599   28102 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:58:38.148694   28102 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:58:38.175746   28102 cri.go:89] found id: "0b9bbd56ef6c76ed87506b65b92848c527e40f5c539ec9f68e55961be7ec43c9"
	I1121 13:58:38.175764   28102 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 13:58:38.175770   28102 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 13:58:38.175775   28102 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 13:58:38.175779   28102 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 13:58:38.175783   28102 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 13:58:38.175787   28102 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 13:58:38.175791   28102 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 13:58:38.175795   28102 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 13:58:38.175805   28102 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 13:58:38.175811   28102 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 13:58:38.175816   28102 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 13:58:38.175820   28102 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 13:58:38.175826   28102 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 13:58:38.175830   28102 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 13:58:38.175846   28102 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 13:58:38.175856   28102 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 13:58:38.175862   28102 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 13:58:38.175866   28102 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 13:58:38.175870   28102 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 13:58:38.175874   28102 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 13:58:38.175879   28102 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 13:58:38.175884   28102 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 13:58:38.175889   28102 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 13:58:38.175898   28102 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 13:58:38.175902   28102 cri.go:89] found id: ""
	I1121 13:58:38.175943   28102 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:58:38.188898   28102 out.go:203] 
	W1121 13:58:38.189857   28102 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:58:38.189871   28102 out.go:285] * 
	W1121 13:58:38.192822   28102 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:58:38.194006   28102 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-243127 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (229.555093ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 13:58:38.249407   28166 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:58:38.249687   28166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:38.249697   28166 out.go:374] Setting ErrFile to fd 2...
	I1121 13:58:38.249702   28166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:38.249862   28166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:58:38.250092   28166 mustload.go:66] Loading cluster: addons-243127
	I1121 13:58:38.250409   28166 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:38.250424   28166 addons.go:622] checking whether the cluster is paused
	I1121 13:58:38.250523   28166 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:38.250537   28166 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:58:38.250899   28166 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:58:38.267724   28166 ssh_runner.go:195] Run: systemctl --version
	I1121 13:58:38.267769   28166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:58:38.284775   28166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:58:38.376402   28166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:58:38.376461   28166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:58:38.405421   28166 cri.go:89] found id: "0b9bbd56ef6c76ed87506b65b92848c527e40f5c539ec9f68e55961be7ec43c9"
	I1121 13:58:38.405438   28166 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 13:58:38.405442   28166 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 13:58:38.405445   28166 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 13:58:38.405448   28166 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 13:58:38.405452   28166 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 13:58:38.405454   28166 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 13:58:38.405457   28166 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 13:58:38.405460   28166 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 13:58:38.405465   28166 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 13:58:38.405470   28166 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 13:58:38.405475   28166 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 13:58:38.405479   28166 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 13:58:38.405483   28166 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 13:58:38.405491   28166 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 13:58:38.405503   28166 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 13:58:38.405511   28166 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 13:58:38.405517   28166 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 13:58:38.405521   28166 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 13:58:38.405526   28166 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 13:58:38.405530   28166 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 13:58:38.405534   28166 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 13:58:38.405538   28166 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 13:58:38.405548   28166 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 13:58:38.405551   28166 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 13:58:38.405554   28166 cri.go:89] found id: ""
	I1121 13:58:38.405612   28166 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:58:38.418710   28166 out.go:203] 
	W1121 13:58:38.419664   28166 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:58:38.419677   28166 out.go:285] * 
	W1121 13:58:38.422624   28166 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:58:38.423748   28166 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-243127 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (34.84s)
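The CSI exercise itself passed end to end: the PVC bound, the pod ran, the snapshot became ready, and the restored PVC and pod came up; only the two addon-disable calls failed. For reference, the restore step the test drives has this shape (a sketch of testdata/csi-hostpath-driver/pvc-restore.yaml, which is not included in this report; the object names come from the log above, while the storage class name csi-hostpath-sc is an assumption):

	$ kubectl --context addons-243127 create -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc   # assumed; not shown in the log
	  accessModes: [ReadWriteOnce]
	  resources:
	    requests:
	      storage: 1Gi
	  dataSource:
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	EOF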

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.36s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-243127 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-243127 --alsologtostderr -v=1: exit status 11 (233.857302ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 13:58:01.283933   24248 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:58:01.284206   24248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:01.284215   24248 out.go:374] Setting ErrFile to fd 2...
	I1121 13:58:01.284219   24248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:01.284437   24248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:58:01.284717   24248 mustload.go:66] Loading cluster: addons-243127
	I1121 13:58:01.285007   24248 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:01.285021   24248 addons.go:622] checking whether the cluster is paused
	I1121 13:58:01.285102   24248 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:01.285112   24248 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:58:01.285435   24248 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:58:01.302649   24248 ssh_runner.go:195] Run: systemctl --version
	I1121 13:58:01.302701   24248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:58:01.319359   24248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:58:01.411251   24248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:58:01.411323   24248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:58:01.438032   24248 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 13:58:01.438049   24248 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 13:58:01.438053   24248 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 13:58:01.438058   24248 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 13:58:01.438063   24248 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 13:58:01.438067   24248 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 13:58:01.438071   24248 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 13:58:01.438098   24248 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 13:58:01.438106   24248 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 13:58:01.438111   24248 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 13:58:01.438113   24248 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 13:58:01.438116   24248 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 13:58:01.438118   24248 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 13:58:01.438121   24248 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 13:58:01.438124   24248 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 13:58:01.438133   24248 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 13:58:01.438135   24248 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 13:58:01.438139   24248 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 13:58:01.438142   24248 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 13:58:01.438144   24248 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 13:58:01.438148   24248 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 13:58:01.438155   24248 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 13:58:01.438160   24248 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 13:58:01.438167   24248 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 13:58:01.438172   24248 cri.go:89] found id: ""
	I1121 13:58:01.438209   24248 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:58:01.451613   24248 out.go:203] 
	W1121 13:58:01.452689   24248 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:58:01.452711   24248 out.go:285] * 
	W1121 13:58:01.457623   24248 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:58:01.458740   24248 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-243127 --alsologtostderr -v=1": exit status 11
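The post-mortem below dumps the node container's full docker inspect. The two State fields relevant to the paused-check can be read directly with format strings, and they match the dump: the container is running and not paused (standard docker Go-template syntax; the port query is the same one the logs show minikube running, and 32768 is the port from the sshutil lines above):

	$ docker inspect addons-243127 --format '{{.State.Status}} paused={{.State.Paused}}'
	running paused=false
	$ docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-243127
	32768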
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-243127
helpers_test.go:243: (dbg) docker inspect addons-243127:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ec68ec4d468a95ec5b7d062b698336448aaad0a936ca78985ffe849253396a6",
	        "Created": "2025-11-21T13:56:12.147939743Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16559,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T13:56:12.179022991Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/1ec68ec4d468a95ec5b7d062b698336448aaad0a936ca78985ffe849253396a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ec68ec4d468a95ec5b7d062b698336448aaad0a936ca78985ffe849253396a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ec68ec4d468a95ec5b7d062b698336448aaad0a936ca78985ffe849253396a6/hosts",
	        "LogPath": "/var/lib/docker/containers/1ec68ec4d468a95ec5b7d062b698336448aaad0a936ca78985ffe849253396a6/1ec68ec4d468a95ec5b7d062b698336448aaad0a936ca78985ffe849253396a6-json.log",
	        "Name": "/addons-243127",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-243127:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-243127",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ec68ec4d468a95ec5b7d062b698336448aaad0a936ca78985ffe849253396a6",
	                "LowerDir": "/var/lib/docker/overlay2/6d71564661e888343ffb73fedd00b3c321cf896f2e5f2566c0291ee2d3de8cea-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6d71564661e888343ffb73fedd00b3c321cf896f2e5f2566c0291ee2d3de8cea/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6d71564661e888343ffb73fedd00b3c321cf896f2e5f2566c0291ee2d3de8cea/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6d71564661e888343ffb73fedd00b3c321cf896f2e5f2566c0291ee2d3de8cea/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-243127",
	                "Source": "/var/lib/docker/volumes/addons-243127/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-243127",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-243127",
	                "name.minikube.sigs.k8s.io": "addons-243127",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8b8bdd0822504c4079d5cb8cc517c2980d440ada4412b4ecf716ef587aef2f4b",
	            "SandboxKey": "/var/run/docker/netns/8b8bdd082250",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-243127": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1c1ab31aac030cbbaad7777676044a32d96f21040721f0aee934d252b39cf533",
	                    "EndpointID": "631164ff79e4da350f56ada42b6efad7975f084b6ec5d08b273c5bd9649c0d0c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "22:43:38:04:42:7c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-243127",
	                        "1ec68ec4d468"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
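The inspect dump above is also what minikube itself queries when it needs the node's endpoints: the SSH address is the host port bound to 22/tcp, and the node IP comes from the NetworkSettings.Networks block. A minimal sketch of pulling those two values with docker's Go templates (the same -f templates that appear in the "Last Start" log below; the container name addons-243127 is specific to this run):

	# host port mapped to the container's SSH port (22/tcp) -> "32768" in this run
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-243127
	# IPv4,IPv6 of the container on the cluster network -> "192.168.49.2," in this run
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' addons-243127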
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-243127 -n addons-243127
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-243127 logs -n 25: (1.030569789s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	│ start   │ -o=json --download-only -p download-only-899209 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-899209   │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │ 21 Nov 25 13:55 UTC │
	│ delete  │ -p download-only-899209                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-899209   │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │ 21 Nov 25 13:55 UTC │
	│ start   │ -o=json --download-only -p download-only-145200 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-145200   │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │ 21 Nov 25 13:55 UTC │
	│ delete  │ -p download-only-145200                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-145200   │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │ 21 Nov 25 13:55 UTC │
	│ delete  │ -p download-only-899209                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-899209   │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │ 21 Nov 25 13:55 UTC │
	│ delete  │ -p download-only-145200                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-145200   │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │ 21 Nov 25 13:55 UTC │
	│ start   │ --download-only -p download-docker-032070 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-032070 │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │                     │
	│ delete  │ -p download-docker-032070                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-032070 │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │ 21 Nov 25 13:55 UTC │
	│ start   │ --download-only -p binary-mirror-248688 --alsologtostderr --binary-mirror http://127.0.0.1:45793 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-248688   │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │                     │
	│ delete  │ -p binary-mirror-248688                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-248688   │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │ 21 Nov 25 13:55 UTC │
	│ addons  │ disable dashboard -p addons-243127                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-243127          │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │                     │
	│ addons  │ enable dashboard -p addons-243127                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-243127          │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │                     │
	│ start   │ -p addons-243127 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-243127          │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │ 21 Nov 25 13:57 UTC │
	│ addons  │ addons-243127 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-243127          │ jenkins │ v1.37.0 │ 21 Nov 25 13:57 UTC │                     │
	│ addons  │ addons-243127 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-243127          │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	│ addons  │ enable headlamp -p addons-243127 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-243127          │ jenkins │ v1.37.0 │ 21 Nov 25 13:58 UTC │                     │
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 13:55:49
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 13:55:49.057153   15892 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:55:49.057381   15892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:55:49.057397   15892 out.go:374] Setting ErrFile to fd 2...
	I1121 13:55:49.057403   15892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:55:49.057632   15892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:55:49.058141   15892 out.go:368] Setting JSON to false
	I1121 13:55:49.058949   15892 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2298,"bootTime":1763731051,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 13:55:49.059017   15892 start.go:143] virtualization: kvm guest
	I1121 13:55:49.060484   15892 out.go:179] * [addons-243127] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 13:55:49.061589   15892 notify.go:221] Checking for updates...
	I1121 13:55:49.061609   15892 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 13:55:49.062745   15892 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 13:55:49.063978   15892 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 13:55:49.065043   15892 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 13:55:49.066036   15892 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 13:55:49.067032   15892 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 13:55:49.068706   15892 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 13:55:49.091488   15892 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 13:55:49.091591   15892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:55:49.147037   15892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-21 13:55:49.138675185 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 13:55:49.147139   15892 docker.go:319] overlay module found
	I1121 13:55:49.148616   15892 out.go:179] * Using the docker driver based on user configuration
	I1121 13:55:49.149629   15892 start.go:309] selected driver: docker
	I1121 13:55:49.149640   15892 start.go:930] validating driver "docker" against <nil>
	I1121 13:55:49.149649   15892 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 13:55:49.150161   15892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:55:49.200423   15892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-21 13:55:49.191843813 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 13:55:49.200588   15892 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 13:55:49.200811   15892 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 13:55:49.202182   15892 out.go:179] * Using Docker driver with root privileges
	I1121 13:55:49.203206   15892 cni.go:84] Creating CNI manager for ""
	I1121 13:55:49.203256   15892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 13:55:49.203265   15892 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 13:55:49.203315   15892 start.go:353] cluster config:
	{Name:addons-243127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-243127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 13:55:49.204405   15892 out.go:179] * Starting "addons-243127" primary control-plane node in "addons-243127" cluster
	I1121 13:55:49.205367   15892 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 13:55:49.206479   15892 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 13:55:49.207514   15892 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 13:55:49.207544   15892 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 13:55:49.207554   15892 cache.go:65] Caching tarball of preloaded images
	I1121 13:55:49.207586   15892 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 13:55:49.207655   15892 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 13:55:49.207669   15892 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 13:55:49.207996   15892 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/config.json ...
	I1121 13:55:49.208023   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/config.json: {Name:mkce95b84801e2e0b8601121d7dfde29ce254004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:55:49.222636   15892 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a to local cache
	I1121 13:55:49.222726   15892 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory
	I1121 13:55:49.222745   15892 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local cache directory, skipping pull
	I1121 13:55:49.222751   15892 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in cache, skipping pull
	I1121 13:55:49.222764   15892 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a as a tarball
	I1121 13:55:49.222774   15892 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from local cache
	I1121 13:56:01.080194   15892 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a from cached tarball
	I1121 13:56:01.080232   15892 cache.go:243] Successfully downloaded all kic artifacts
	I1121 13:56:01.080284   15892 start.go:360] acquireMachinesLock for addons-243127: {Name:mkea124a4b7a8ba801648345708233fc7b1fdc41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 13:56:01.080387   15892 start.go:364] duration metric: took 80.734µs to acquireMachinesLock for "addons-243127"
	I1121 13:56:01.080416   15892 start.go:93] Provisioning new machine with config: &{Name:addons-243127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-243127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 13:56:01.080509   15892 start.go:125] createHost starting for "" (driver="docker")
	I1121 13:56:01.082193   15892 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1121 13:56:01.082411   15892 start.go:159] libmachine.API.Create for "addons-243127" (driver="docker")
	I1121 13:56:01.082447   15892 client.go:173] LocalClient.Create starting
	I1121 13:56:01.082545   15892 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem
	I1121 13:56:01.258168   15892 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem
	I1121 13:56:01.344661   15892 cli_runner.go:164] Run: docker network inspect addons-243127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 13:56:01.360952   15892 cli_runner.go:211] docker network inspect addons-243127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 13:56:01.361010   15892 network_create.go:284] running [docker network inspect addons-243127] to gather additional debugging logs...
	I1121 13:56:01.361031   15892 cli_runner.go:164] Run: docker network inspect addons-243127
	W1121 13:56:01.375481   15892 cli_runner.go:211] docker network inspect addons-243127 returned with exit code 1
	I1121 13:56:01.375504   15892 network_create.go:287] error running [docker network inspect addons-243127]: docker network inspect addons-243127: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-243127 not found
	I1121 13:56:01.375518   15892 network_create.go:289] output of [docker network inspect addons-243127]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-243127 not found
	
	** /stderr **
	I1121 13:56:01.375650   15892 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 13:56:01.390468   15892 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dd9820}
	I1121 13:56:01.390503   15892 network_create.go:124] attempt to create docker network addons-243127 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1121 13:56:01.390547   15892 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-243127 addons-243127
	I1121 13:56:01.432170   15892 network_create.go:108] docker network addons-243127 192.168.49.0/24 created
	I1121 13:56:01.432206   15892 kic.go:121] calculated static IP "192.168.49.2" for the "addons-243127" container
	I1121 13:56:01.432268   15892 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 13:56:01.446919   15892 cli_runner.go:164] Run: docker volume create addons-243127 --label name.minikube.sigs.k8s.io=addons-243127 --label created_by.minikube.sigs.k8s.io=true
	I1121 13:56:01.462258   15892 oci.go:103] Successfully created a docker volume addons-243127
	I1121 13:56:01.462323   15892 cli_runner.go:164] Run: docker run --rm --name addons-243127-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-243127 --entrypoint /usr/bin/test -v addons-243127:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 13:56:07.853063   15892 cli_runner.go:217] Completed: docker run --rm --name addons-243127-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-243127 --entrypoint /usr/bin/test -v addons-243127:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib: (6.390702863s)
	I1121 13:56:07.853088   15892 oci.go:107] Successfully prepared a docker volume addons-243127
	I1121 13:56:07.853137   15892 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 13:56:07.853149   15892 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 13:56:07.853203   15892 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-243127:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 13:56:12.081204   15892 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-243127:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.227968178s)
	I1121 13:56:12.081230   15892 kic.go:203] duration metric: took 4.22807771s to extract preloaded images to volume ...
	W1121 13:56:12.081301   15892 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1121 13:56:12.081348   15892 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1121 13:56:12.081384   15892 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 13:56:12.133464   15892 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-243127 --name addons-243127 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-243127 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-243127 --network addons-243127 --ip 192.168.49.2 --volume addons-243127:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 13:56:12.421642   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Running}}
	I1121 13:56:12.439685   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:12.458665   15892 cli_runner.go:164] Run: docker exec addons-243127 stat /var/lib/dpkg/alternatives/iptables
	I1121 13:56:12.505446   15892 oci.go:144] the created container "addons-243127" has a running status.
	I1121 13:56:12.505481   15892 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa...
	I1121 13:56:12.631777   15892 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 13:56:12.659186   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:12.680107   15892 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 13:56:12.680124   15892 kic_runner.go:114] Args: [docker exec --privileged addons-243127 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 13:56:12.724665   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:12.747889   15892 machine.go:94] provisionDockerMachine start ...
	I1121 13:56:12.747996   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:12.767339   15892 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:12.767642   15892 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 13:56:12.767666   15892 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 13:56:12.899016   15892 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-243127
	
	I1121 13:56:12.899040   15892 ubuntu.go:182] provisioning hostname "addons-243127"
	I1121 13:56:12.899093   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:12.916800   15892 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:12.917000   15892 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 13:56:12.917017   15892 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-243127 && echo "addons-243127" | sudo tee /etc/hostname
	I1121 13:56:13.053816   15892 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-243127
	
	I1121 13:56:13.053905   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:13.072412   15892 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:13.072637   15892 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 13:56:13.072655   15892 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-243127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-243127/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-243127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 13:56:13.197111   15892 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 13:56:13.197134   15892 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11045/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11045/.minikube}
	I1121 13:56:13.197151   15892 ubuntu.go:190] setting up certificates
	I1121 13:56:13.197162   15892 provision.go:84] configureAuth start
	I1121 13:56:13.197219   15892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-243127
	I1121 13:56:13.213450   15892 provision.go:143] copyHostCerts
	I1121 13:56:13.213522   15892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem (1078 bytes)
	I1121 13:56:13.213653   15892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem (1123 bytes)
	I1121 13:56:13.213720   15892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem (1679 bytes)
	I1121 13:56:13.213777   15892 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem org=jenkins.addons-243127 san=[127.0.0.1 192.168.49.2 addons-243127 localhost minikube]
	I1121 13:56:13.336773   15892 provision.go:177] copyRemoteCerts
	I1121 13:56:13.336818   15892 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 13:56:13.336848   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:13.353740   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:13.445589   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 13:56:13.462743   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1121 13:56:13.478249   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 13:56:13.493470   15892 provision.go:87] duration metric: took 296.286511ms to configureAuth
	I1121 13:56:13.493488   15892 ubuntu.go:206] setting minikube options for container-runtime
	I1121 13:56:13.493679   15892 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:56:13.493811   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:13.509651   15892 main.go:143] libmachine: Using SSH client type: native
	I1121 13:56:13.509857   15892 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 13:56:13.509881   15892 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 13:56:13.760189   15892 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 13:56:13.760214   15892 machine.go:97] duration metric: took 1.012300095s to provisionDockerMachine
	I1121 13:56:13.760224   15892 client.go:176] duration metric: took 12.6777646s to LocalClient.Create
	I1121 13:56:13.760242   15892 start.go:167] duration metric: took 12.677831425s to libmachine.API.Create "addons-243127"
	I1121 13:56:13.760252   15892 start.go:293] postStartSetup for "addons-243127" (driver="docker")
	I1121 13:56:13.760263   15892 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 13:56:13.760314   15892 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 13:56:13.760361   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:13.776420   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:13.868934   15892 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 13:56:13.872119   15892 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 13:56:13.872143   15892 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 13:56:13.872152   15892 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/addons for local assets ...
	I1121 13:56:13.872206   15892 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/files for local assets ...
	I1121 13:56:13.872233   15892 start.go:296] duration metric: took 111.975104ms for postStartSetup
	I1121 13:56:13.872494   15892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-243127
	I1121 13:56:13.889054   15892 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/config.json ...
	I1121 13:56:13.889320   15892 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 13:56:13.889367   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:13.904631   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:13.993646   15892 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 13:56:13.998006   15892 start.go:128] duration metric: took 12.917483058s to createHost
	I1121 13:56:13.998022   15892 start.go:83] releasing machines lock for "addons-243127", held for 12.91762329s
	I1121 13:56:13.998082   15892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-243127
	I1121 13:56:14.013323   15892 ssh_runner.go:195] Run: cat /version.json
	I1121 13:56:14.013359   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:14.013413   15892 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 13:56:14.013467   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:14.029416   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:14.031616   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:14.178855   15892 ssh_runner.go:195] Run: systemctl --version
	I1121 13:56:14.184530   15892 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 13:56:14.215444   15892 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 13:56:14.219879   15892 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 13:56:14.219930   15892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 13:56:14.243256   15892 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 13:56:14.243276   15892 start.go:496] detecting cgroup driver to use...
	I1121 13:56:14.243306   15892 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 13:56:14.243345   15892 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 13:56:14.257146   15892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 13:56:14.267623   15892 docker.go:218] disabling cri-docker service (if available) ...
	I1121 13:56:14.267672   15892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 13:56:14.281734   15892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 13:56:14.297135   15892 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 13:56:14.375212   15892 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 13:56:14.456799   15892 docker.go:234] disabling docker service ...
	I1121 13:56:14.456861   15892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 13:56:14.473705   15892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 13:56:14.484331   15892 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 13:56:14.561730   15892 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 13:56:14.637031   15892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 13:56:14.647498   15892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 13:56:14.659765   15892 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 13:56:14.659818   15892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:14.668839   15892 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 13:56:14.668881   15892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:14.676341   15892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:14.683945   15892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:14.691259   15892 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 13:56:14.698092   15892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:14.705424   15892 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 13:56:14.716901   15892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
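
	The grep -q ... || sed pairing above keeps this config edit idempotent: the default_sysctls block is created only when absent, and the follow-up sed injects the net.ipv4.ip_unprivileged_port_start entry right after the opening bracket, so repeated starts do not stack duplicate lines. A rough Go equivalent of the "ensure this block exists" pattern (file path and contents mirror the log; the helper name is made up for the sketch, and it appends rather than inserting in place):

	package main

	import (
		"os"
		"strings"
	)

	// ensureBlock appends block to path only when no existing line starts with
	// prefix, mirroring the `grep -q || sed -i` idiom from the log.
	func ensureBlock(path, prefix, block string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, ln := range strings.Split(string(data), "\n") {
			if strings.HasPrefix(strings.TrimSpace(ln), prefix) {
				return nil // already present; do nothing on re-runs
			}
		}
		return os.WriteFile(path, append(data, []byte(block)...), 0o644)
	}

	func main() {
		err := ensureBlock("/etc/crio/crio.conf.d/02-crio.conf",
			"default_sysctls",
			"default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n")
		if err != nil {
			panic(err)
		}
	}
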
	I1121 13:56:14.724286   15892 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 13:56:14.730656   15892 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1121 13:56:14.730698   15892 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1121 13:56:14.741242   15892 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
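
	The failed sysctl above is expected on a fresh container: /proc/sys/net/bridge/ only materializes once the br_netfilter kernel module is loaded, so the code treats it as a soft error, falls back to modprobe, and then enables IPv4 forwarding. A hedged Go sketch of that probe-then-fallback sequence (paths are the ones in the log; the control flow is illustrative):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Probe: this file exists only after br_netfilter is loaded.
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			// Soft failure, as in the log: fall back to loading the module.
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				panic(err)
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
			panic(err)
		}
	}
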
	I1121 13:56:14.747538   15892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 13:56:14.820143   15892 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 13:56:14.942432   15892 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 13:56:14.942488   15892 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 13:56:14.945985   15892 start.go:564] Will wait 60s for crictl version
	I1121 13:56:14.946030   15892 ssh_runner.go:195] Run: which crictl
	I1121 13:56:14.949177   15892 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 13:56:14.972995   15892 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 13:56:14.973077   15892 ssh_runner.go:195] Run: crio --version
	I1121 13:56:14.997307   15892 ssh_runner.go:195] Run: crio --version
	I1121 13:56:15.023918   15892 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 13:56:15.024984   15892 cli_runner.go:164] Run: docker network inspect addons-243127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 13:56:15.040870   15892 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1121 13:56:15.044477   15892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
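
	The /etc/hosts update uses a replace-then-append idiom: grep -v strips any stale line for the name, the fresh mapping is echoed on, the result goes to a temp file, and sudo cp moves it into place; the same pattern recurs later for control-plane.minikube.internal. A stdlib Go rendering of the idea (IP and hostname from the log; the helper is illustrative):

	package main

	import (
		"os"
		"strings"
	)

	// setHostsEntry rewrites path so exactly one line maps name to ip,
	// mirroring the grep -v + echo + cp idiom from the log.
	func setHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, ln := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(ln, "\t"+name) {
				continue // drop any stale mapping for this name
			}
			kept = append(kept, ln)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := setHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}
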
	I1121 13:56:15.053821   15892 kubeadm.go:884] updating cluster {Name:addons-243127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-243127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 13:56:15.053923   15892 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 13:56:15.053966   15892 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 13:56:15.082096   15892 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 13:56:15.082112   15892 crio.go:433] Images already preloaded, skipping extraction
	I1121 13:56:15.082152   15892 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 13:56:15.103636   15892 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 13:56:15.103653   15892 cache_images.go:86] Images are preloaded, skipping loading
	I1121 13:56:15.103659   15892 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1121 13:56:15.103740   15892 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-243127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-243127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 13:56:15.103792   15892 ssh_runner.go:195] Run: crio config
	I1121 13:56:15.144422   15892 cni.go:84] Creating CNI manager for ""
	I1121 13:56:15.144439   15892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 13:56:15.144455   15892 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 13:56:15.144476   15892 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-243127 NodeName:addons-243127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 13:56:15.144615   15892 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-243127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 13:56:15.144664   15892 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 13:56:15.151720   15892 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 13:56:15.151763   15892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 13:56:15.159128   15892 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1121 13:56:15.170317   15892 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 13:56:15.183701   15892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
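
	The 2209-byte file written here is the multi-document kubeadm config dumped above: four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A tiny Go sketch that splits such a file and reports each document's kind, handy when eyeballing what was actually rendered (path from the log; the parsing is deliberately naive):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
		// YAML documents are separated by a line containing only ---.
		for i, doc := range strings.Split(string(data), "\n---\n") {
			if m := kindRe.FindStringSubmatch(doc); m != nil {
				fmt.Printf("document %d: %s\n", i+1, m[1])
			}
		}
	}
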
	I1121 13:56:15.194636   15892 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1121 13:56:15.197680   15892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 13:56:15.206303   15892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 13:56:15.278921   15892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 13:56:15.300342   15892 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127 for IP: 192.168.49.2
	I1121 13:56:15.300356   15892 certs.go:195] generating shared ca certs ...
	I1121 13:56:15.300369   15892 certs.go:227] acquiring lock for ca certs: {Name:mkde3a7d6f17b238f06eab3a140993599f1b4367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:15.300477   15892 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key
	I1121 13:56:15.471299   15892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt ...
	I1121 13:56:15.471325   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt: {Name:mk61b49fa89e084ba2749969322820f2bb2c6d21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:15.471494   15892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key ...
	I1121 13:56:15.471510   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key: {Name:mke7fb5f0ae9e7ba8c7140d87cbc59455899f32a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:15.471630   15892 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key
	I1121 13:56:15.636314   15892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt ...
	I1121 13:56:15.636339   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt: {Name:mk0a574429af51245df02d07a08a97d85f76ece6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:15.636483   15892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key ...
	I1121 13:56:15.636494   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key: {Name:mk2a9cdd54e0b1b68111efd8b987f1d2a79ad5cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:15.636580   15892 certs.go:257] generating profile certs ...
	I1121 13:56:15.636636   15892 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.key
	I1121 13:56:15.636650   15892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt with IP's: []
	I1121 13:56:15.781577   15892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt ...
	I1121 13:56:15.781599   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: {Name:mk1d9c1991e5dfc8fd2703c373557eebcfd0a745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:15.781734   15892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.key ...
	I1121 13:56:15.781744   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.key: {Name:mk5f74bf97b7db47fe3a4f6a5e196a3f3088b2ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:15.781808   15892 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.key.285be2fb
	I1121 13:56:15.781825   15892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.crt.285be2fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1121 13:56:16.010672   15892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.crt.285be2fb ...
	I1121 13:56:16.010694   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.crt.285be2fb: {Name:mk90ccdf18068edc086dc7f222dd06f21dbf5c8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:16.010840   15892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.key.285be2fb ...
	I1121 13:56:16.010853   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.key.285be2fb: {Name:mkb3b2eb936646a224b260dbeb3c4c9ffc2b4d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:16.010919   15892 certs.go:382] copying /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.crt.285be2fb -> /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.crt
	I1121 13:56:16.010987   15892 certs.go:386] copying /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.key.285be2fb -> /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.key
	I1121 13:56:16.011034   15892 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.key
	I1121 13:56:16.011050   15892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.crt with IP's: []
	I1121 13:56:16.271488   15892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.crt ...
	I1121 13:56:16.271519   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.crt: {Name:mk55ebbc2f27359aac3b7bea8e90ef2f44a5f8c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:16.271675   15892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.key ...
	I1121 13:56:16.271686   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.key: {Name:mk052d62d16404c6504555f85be9a6b81ddecae7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:16.271855   15892 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 13:56:16.271887   15892 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem (1078 bytes)
	I1121 13:56:16.271907   15892 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem (1123 bytes)
	I1121 13:56:16.271930   15892 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem (1679 bytes)
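
	The apiserver certificate generated above is signed for 10.96.0.1 (the first in-cluster service IP), 127.0.0.1, 10.0.0.1, and the node IP 192.168.49.2. A compact Go sketch of issuing a server certificate with IP SANs from a CA, roughly the shape of what crypto.go is doing (key size, validity, and subject names here are illustrative values, not minikube's; errors are elided for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Illustrative self-signed CA (minikube loads its CA from ~/.minikube instead).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the IP SANs seen in the log.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
			},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
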
	I1121 13:56:16.272486   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 13:56:16.289306   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 13:56:16.305008   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 13:56:16.320330   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 13:56:16.335307   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 13:56:16.350489   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 13:56:16.366092   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 13:56:16.381039   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 13:56:16.396223   15892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 13:56:16.412959   15892 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 13:56:16.424035   15892 ssh_runner.go:195] Run: openssl version
	I1121 13:56:16.429259   15892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 13:56:16.438455   15892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 13:56:16.441659   15892 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 13:56:16.441704   15892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 13:56:16.474470   15892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 13:56:16.481922   15892 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 13:56:16.484952   15892 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 13:56:16.484993   15892 kubeadm.go:401] StartCluster: {Name:addons-243127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-243127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 13:56:16.485050   15892 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:56:16.485082   15892 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:56:16.509135   15892 cri.go:89] found id: ""
	I1121 13:56:16.509177   15892 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 13:56:16.515988   15892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 13:56:16.522710   15892 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 13:56:16.522749   15892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 13:56:16.529292   15892 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 13:56:16.529309   15892 kubeadm.go:158] found existing configuration files:
	
	I1121 13:56:16.529348   15892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 13:56:16.535778   15892 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 13:56:16.535812   15892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 13:56:16.542224   15892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 13:56:16.548661   15892 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 13:56:16.548705   15892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 13:56:16.554972   15892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 13:56:16.561551   15892 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 13:56:16.561609   15892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 13:56:16.567780   15892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 13:56:16.574277   15892 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 13:56:16.574318   15892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 13:56:16.580489   15892 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 13:56:16.612951   15892 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 13:56:16.613048   15892 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 13:56:16.632087   15892 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 13:56:16.632179   15892 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 13:56:16.632253   15892 kubeadm.go:319] OS: Linux
	I1121 13:56:16.632298   15892 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 13:56:16.632366   15892 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 13:56:16.632428   15892 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 13:56:16.632478   15892 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 13:56:16.632567   15892 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 13:56:16.632643   15892 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 13:56:16.632733   15892 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 13:56:16.632780   15892 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 13:56:16.681905   15892 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 13:56:16.682066   15892 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 13:56:16.682233   15892 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 13:56:16.688772   15892 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 13:56:16.691049   15892 out.go:252]   - Generating certificates and keys ...
	I1121 13:56:16.691143   15892 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 13:56:16.691233   15892 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 13:56:16.801776   15892 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 13:56:16.973155   15892 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 13:56:17.276734   15892 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 13:56:17.397878   15892 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 13:56:17.706843   15892 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 13:56:17.707001   15892 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-243127 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 13:56:18.151055   15892 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 13:56:18.151216   15892 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-243127 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 13:56:18.290947   15892 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 13:56:18.430148   15892 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 13:56:19.049420   15892 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 13:56:19.049512   15892 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 13:56:19.338837   15892 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 13:56:19.437878   15892 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 13:56:19.651247   15892 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 13:56:19.802783   15892 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 13:56:20.356874   15892 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 13:56:20.357347   15892 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 13:56:20.360907   15892 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 13:56:20.362394   15892 out.go:252]   - Booting up control plane ...
	I1121 13:56:20.362474   15892 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 13:56:20.362543   15892 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 13:56:20.362948   15892 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 13:56:20.375185   15892 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 13:56:20.375294   15892 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 13:56:20.381073   15892 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 13:56:20.381362   15892 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 13:56:20.381441   15892 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 13:56:20.472352   15892 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 13:56:20.472493   15892 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 13:56:20.973332   15892 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.053268ms
	I1121 13:56:20.976151   15892 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 13:56:20.976271   15892 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1121 13:56:20.976364   15892 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 13:56:20.976440   15892 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 13:56:22.097814   15892 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.121478607s
	I1121 13:56:22.759437   15892 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.783162469s
	I1121 13:56:24.477965   15892 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501711789s
	I1121 13:56:24.487731   15892 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 13:56:24.496049   15892 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 13:56:24.503673   15892 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 13:56:24.503960   15892 kubeadm.go:319] [mark-control-plane] Marking the node addons-243127 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 13:56:24.511144   15892 kubeadm.go:319] [bootstrap-token] Using token: zw15bu.qcstwz6fx1p3zpbt
	I1121 13:56:24.512387   15892 out.go:252]   - Configuring RBAC rules ...
	I1121 13:56:24.512524   15892 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 13:56:24.514841   15892 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 13:56:24.519016   15892 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 13:56:24.521766   15892 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 13:56:24.523748   15892 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 13:56:24.525654   15892 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 13:56:24.883187   15892 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 13:56:25.297065   15892 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 13:56:25.882940   15892 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 13:56:25.883777   15892 kubeadm.go:319] 
	I1121 13:56:25.883865   15892 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 13:56:25.883874   15892 kubeadm.go:319] 
	I1121 13:56:25.883976   15892 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 13:56:25.883997   15892 kubeadm.go:319] 
	I1121 13:56:25.884030   15892 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 13:56:25.884118   15892 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 13:56:25.884202   15892 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 13:56:25.884212   15892 kubeadm.go:319] 
	I1121 13:56:25.884276   15892 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 13:56:25.884287   15892 kubeadm.go:319] 
	I1121 13:56:25.884354   15892 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 13:56:25.884363   15892 kubeadm.go:319] 
	I1121 13:56:25.884434   15892 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 13:56:25.884531   15892 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 13:56:25.884647   15892 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 13:56:25.884662   15892 kubeadm.go:319] 
	I1121 13:56:25.884793   15892 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 13:56:25.884867   15892 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 13:56:25.884874   15892 kubeadm.go:319] 
	I1121 13:56:25.884976   15892 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zw15bu.qcstwz6fx1p3zpbt \
	I1121 13:56:25.885118   15892 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 \
	I1121 13:56:25.885140   15892 kubeadm.go:319] 	--control-plane 
	I1121 13:56:25.885147   15892 kubeadm.go:319] 
	I1121 13:56:25.885259   15892 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 13:56:25.885269   15892 kubeadm.go:319] 
	I1121 13:56:25.885344   15892 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zw15bu.qcstwz6fx1p3zpbt \
	I1121 13:56:25.885469   15892 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 
	I1121 13:56:25.887390   15892 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 13:56:25.887548   15892 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 13:56:25.887607   15892 cni.go:84] Creating CNI manager for ""
	I1121 13:56:25.887626   15892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 13:56:25.889059   15892 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 13:56:25.890142   15892 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 13:56:25.894147   15892 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 13:56:25.894161   15892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 13:56:25.906093   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 13:56:26.088873   15892 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 13:56:26.088959   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:26.088974   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-243127 minikube.k8s.io/updated_at=2025_11_21T13_56_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=addons-243127 minikube.k8s.io/primary=true
	I1121 13:56:26.167598   15892 ops.go:34] apiserver oom_adj: -16
	I1121 13:56:26.167695   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:26.667916   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:27.168465   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:27.668036   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:28.168585   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:28.667821   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:29.168454   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:29.667892   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:30.168312   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:30.668161   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:31.167740   15892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 13:56:31.226960   15892 kubeadm.go:1114] duration metric: took 5.138055817s to wait for elevateKubeSystemPrivileges
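
	The burst of identical `kubectl get sa default` runs above is a poll: the default service account only exists once kube-controller-manager's service-account controller has run, so minikube retries on a fixed interval until the command succeeds (about 5.1s here). A sketch of the same wait loop (interval and timeout are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls until `kubectl get sa default` succeeds, the same
	// shape as the retry loop in the log.
	func waitForDefaultSA(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if cmd.Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA(2 * time.Minute); err != nil {
			panic(err)
		}
	}
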
	I1121 13:56:31.227003   15892 kubeadm.go:403] duration metric: took 14.742010813s to StartCluster
	I1121 13:56:31.227023   15892 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:31.227128   15892 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 13:56:31.227546   15892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 13:56:31.227738   15892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 13:56:31.227761   15892 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 13:56:31.227815   15892 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1121 13:56:31.227950   15892 addons.go:70] Setting gcp-auth=true in profile "addons-243127"
	I1121 13:56:31.227963   15892 addons.go:70] Setting ingress-dns=true in profile "addons-243127"
	I1121 13:56:31.227978   15892 addons.go:239] Setting addon ingress-dns=true in "addons-243127"
	I1121 13:56:31.227983   15892 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-243127"
	I1121 13:56:31.227984   15892 addons.go:70] Setting registry-creds=true in profile "addons-243127"
	I1121 13:56:31.227995   15892 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-243127"
	I1121 13:56:31.228013   15892 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:56:31.228030   15892 addons.go:70] Setting inspektor-gadget=true in profile "addons-243127"
	I1121 13:56:31.228037   15892 addons.go:70] Setting volcano=true in profile "addons-243127"
	I1121 13:56:31.228023   15892 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-243127"
	I1121 13:56:31.228032   15892 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-243127"
	I1121 13:56:31.228047   15892 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-243127"
	I1121 13:56:31.228052   15892 addons.go:70] Setting volumesnapshots=true in profile "addons-243127"
	I1121 13:56:31.227978   15892 mustload.go:66] Loading cluster: addons-243127
	I1121 13:56:31.228068   15892 addons.go:70] Setting registry=true in profile "addons-243127"
	I1121 13:56:31.228083   15892 addons.go:239] Setting addon registry=true in "addons-243127"
	I1121 13:56:31.228088   15892 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-243127"
	I1121 13:56:31.228107   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228118   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228121   15892 addons.go:239] Setting addon volumesnapshots=true in "addons-243127"
	I1121 13:56:31.228127   15892 addons.go:70] Setting cloud-spanner=true in profile "addons-243127"
	I1121 13:56:31.228141   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228144   15892 addons.go:239] Setting addon cloud-spanner=true in "addons-243127"
	I1121 13:56:31.228181   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228219   15892 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:56:31.228253   15892 addons.go:70] Setting metrics-server=true in profile "addons-243127"
	I1121 13:56:31.228286   15892 addons.go:239] Setting addon metrics-server=true in "addons-243127"
	I1121 13:56:31.228318   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228048   15892 addons.go:239] Setting addon volcano=true in "addons-243127"
	I1121 13:56:31.228408   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228482   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.228674   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.228121   15892 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-243127"
	I1121 13:56:31.228778   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228792   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.229046   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.227950   15892 addons.go:70] Setting yakd=true in profile "addons-243127"
	I1121 13:56:31.228043   15892 addons.go:239] Setting addon inspektor-gadget=true in "addons-243127"
	I1121 13:56:31.228061   15892 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-243127"
	I1121 13:56:31.227958   15892 addons.go:70] Setting ingress=true in profile "addons-243127"
	I1121 13:56:31.228675   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.228022   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.229294   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.228674   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.229626   15892 out.go:179] * Verifying Kubernetes components...
	I1121 13:56:31.229711   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.229644   15892 addons.go:239] Setting addon yakd=true in "addons-243127"
	I1121 13:56:31.229998   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.230003   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.230490   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.228038   15892 addons.go:70] Setting default-storageclass=true in profile "addons-243127"
	I1121 13:56:31.233034   15892 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-243127"
	I1121 13:56:31.233338   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.233704   15892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 13:56:31.233822   15892 addons.go:239] Setting addon ingress=true in "addons-243127"
	I1121 13:56:31.233860   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.229626   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.234649   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228024   15892 addons.go:239] Setting addon registry-creds=true in "addons-243127"
	I1121 13:56:31.236578   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.228026   15892 addons.go:70] Setting storage-provisioner=true in profile "addons-243127"
	I1121 13:56:31.237338   15892 addons.go:239] Setting addon storage-provisioner=true in "addons-243127"
	I1121 13:56:31.237361   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.237612   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.237842   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.238752   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.228022   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.240213   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.256946   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
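
	The interleaved output in this block (many `Setting addon` and `docker container inspect` lines with overlapping timestamps) is consistent with each addon being enabled on its own goroutine. A generic fan-out sketch of that shape (addon names are examples; real code would apply the manifests instead of printing):

	package main

	import (
		"fmt"
		"sync"
	)

	func main() {
		addons := []string{"registry", "ingress", "metrics-server", "csi-hostpath-driver"}
		var wg sync.WaitGroup
		for _, a := range addons {
			wg.Add(1)
			go func(name string) {
				defer wg.Done()
				// Real code would install and verify the addon here.
				fmt.Println("enabling addon:", name)
			}(a)
		}
		wg.Wait()
	}
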
	I1121 13:56:31.294288   15892 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1121 13:56:31.294364   15892 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1121 13:56:31.295982   15892 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 13:56:31.296061   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1121 13:56:31.296034   15892 out.go:179]   - Using image docker.io/registry:3.0.0
	I1121 13:56:31.296245   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.298955   15892 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1121 13:56:31.298972   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1121 13:56:31.299021   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.310218   15892 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1121 13:56:31.311377   15892 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1121 13:56:31.312427   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1121 13:56:31.311385   15892 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1121 13:56:31.312679   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.313505   15892 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 13:56:31.313522   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1121 13:56:31.313587   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.315995   15892 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1121 13:56:31.317805   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1121 13:56:31.318731   15892 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1121 13:56:31.318747   15892 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1121 13:56:31.318819   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.319357   15892 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1121 13:56:31.319371   15892 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1121 13:56:31.319506   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.320034   15892 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 13:56:31.321019   15892 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 13:56:31.321037   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 13:56:31.321083   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.321545   15892 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-243127"
	I1121 13:56:31.321642   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.322150   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.322473   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1121 13:56:31.326348   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1121 13:56:31.327348   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1121 13:56:31.328375   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1121 13:56:31.330379   15892 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1121 13:56:31.330590   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1121 13:56:31.331397   15892 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 13:56:31.331416   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1121 13:56:31.331469   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.332484   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.333571   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1121 13:56:31.334623   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1121 13:56:31.336147   15892 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1121 13:56:31.337249   15892 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1121 13:56:31.337815   15892 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1121 13:56:31.337830   15892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1121 13:56:31.337904   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.338767   15892 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 13:56:31.338792   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1121 13:56:31.339035   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
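
Editor's note: the repeated "scp memory --> /etc/kubernetes/addons/... (N bytes)" entries mean each manifest is rendered in memory and streamed to the node over SSH rather than copied from a file on disk. A rough shell equivalent, assuming the forwarded endpoint and key seen later in this log (127.0.0.1:32768, user docker); the target filename and here-doc body are stand-ins, not the real manifest:

	ssh -i /home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa \
	    -p 32768 -o StrictHostKeyChecking=no docker@127.0.0.1 \
	    'sudo tee /etc/kubernetes/addons/example.yaml >/dev/null' <<'EOF'
	# placeholder manifest body
	EOF

sudo tee on the remote side is what lets an unprivileged SSH user write under /etc/kubernetes.
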
	W1121 13:56:31.343647   15892 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1121 13:56:31.350700   15892 addons.go:239] Setting addon default-storageclass=true in "addons-243127"
	I1121 13:56:31.350743   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:31.351262   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:31.368340   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.368646   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.369115   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.369287   15892 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1121 13:56:31.369352   15892 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1121 13:56:31.372227   15892 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1121 13:56:31.372249   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1121 13:56:31.372298   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.372498   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.376576   15892 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1121 13:56:31.376979   15892 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 13:56:31.377040   15892 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1121 13:56:31.378028   15892 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1121 13:56:31.378046   15892 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1121 13:56:31.378106   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.382280   15892 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 13:56:31.382610   15892 out.go:179]   - Using image docker.io/busybox:stable
	I1121 13:56:31.385186   15892 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 13:56:31.385237   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1121 13:56:31.385348   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.385598   15892 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 13:56:31.385655   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1121 13:56:31.385857   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.399732   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.399869   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.408281   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.408769   15892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
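
Editor's note: the long pipeline above patches the live coredns ConfigMap in place — dump it as YAML, sed-insert a hosts block (192.168.49.1 -> host.minikube.internal) before the forward-to-resolv.conf line and a log directive before errors, then push the result back through kubectl replace. The same command, unrolled for readability only:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
	        -e '/^        errors *$/i \        log' \
	  | sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig replace -f -

The "host record injected" line at 13:56:31.760585 below is this pipeline completing.
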
	I1121 13:56:31.411274   15892 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 13:56:31.411375   15892 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 13:56:31.411474   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:31.411465   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.413862   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.422517   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.428924   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	W1121 13:56:31.432908   15892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 13:56:31.432973   15892 retry.go:31] will retry after 357.87567ms: ssh: handshake failed: EOF
	I1121 13:56:31.440097   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.445769   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.448763   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:31.454266   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	W1121 13:56:31.456835   15892 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 13:56:31.456884   15892 retry.go:31] will retry after 156.476315ms: ssh: handshake failed: EOF
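
Editor's note: the two "ssh: handshake failed: EOF" warnings are expected this early — the dials race the node's sshd coming up, so sshutil simply retries after a short randomized delay (357ms and 156ms here). The same give-it-a-moment loop in shell form; attempt count and sleep are illustrative, endpoint from this log:

	HOST=127.0.0.1; PORT=32768   # forwarded SSH endpoint from this log
	for attempt in 1 2 3 4 5; do
	  if ssh -p "$PORT" -o ConnectTimeout=2 -o BatchMode=yes docker@"$HOST" true 2>/dev/null; then
	    echo "ssh is up after $attempt attempt(s)"; break
	  fi
	  sleep 0.3   # sshutil retries with a randomized delay in the same ballpark
	done
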
	I1121 13:56:31.460891   15892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 13:56:31.557167   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 13:56:31.558024   15892 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1121 13:56:31.558086   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1121 13:56:31.564964   15892 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1121 13:56:31.565049   15892 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1121 13:56:31.568272   15892 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1121 13:56:31.568290   15892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1121 13:56:31.569187   15892 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1121 13:56:31.569200   15892 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1121 13:56:31.575914   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 13:56:31.577748   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 13:56:31.581650   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 13:56:31.584925   15892 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1121 13:56:31.584941   15892 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1121 13:56:31.587476   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1121 13:56:31.593026   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 13:56:31.598722   15892 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1121 13:56:31.598779   15892 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1121 13:56:31.604371   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1121 13:56:31.610281   15892 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1121 13:56:31.610299   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1121 13:56:31.612243   15892 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1121 13:56:31.612259   15892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1121 13:56:31.620306   15892 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1121 13:56:31.620323   15892 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1121 13:56:31.623257   15892 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 13:56:31.623334   15892 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1121 13:56:31.624861   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 13:56:31.647233   15892 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1121 13:56:31.647258   15892 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1121 13:56:31.655809   15892 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1121 13:56:31.655886   15892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1121 13:56:31.658906   15892 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1121 13:56:31.658979   15892 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1121 13:56:31.670662   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 13:56:31.671704   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1121 13:56:31.700275   15892 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1121 13:56:31.700300   15892 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1121 13:56:31.701033   15892 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1121 13:56:31.701094   15892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1121 13:56:31.708054   15892 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1121 13:56:31.708077   15892 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1121 13:56:31.751710   15892 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1121 13:56:31.751732   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1121 13:56:31.753543   15892 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1121 13:56:31.753612   15892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1121 13:56:31.760585   15892 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1121 13:56:31.762326   15892 node_ready.go:35] waiting up to 6m0s for node "addons-243127" to be "Ready" ...
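
Editor's note: "waiting up to 6m0s for node ... to be \"Ready\"" is a poll against the node's Ready condition; every node_ready.go:57 warning below is one unsuccessful check of that loop. kubectl can express the same wait directly (assuming a reachable kubeconfig; the timeout mirrors the log):

	kubectl wait node addons-243127 --for=condition=Ready --timeout=6m
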
	I1121 13:56:31.770035   15892 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 13:56:31.770439   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1121 13:56:31.814691   15892 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1121 13:56:31.814776   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1121 13:56:31.817370   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 13:56:31.820220   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 13:56:31.821409   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1121 13:56:31.869538   15892 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1121 13:56:31.869655   15892 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1121 13:56:31.922583   15892 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1121 13:56:31.922674   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1121 13:56:31.965555   15892 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1121 13:56:31.965600   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1121 13:56:32.006699   15892 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 13:56:32.006731   15892 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1121 13:56:32.025981   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 13:56:32.065751   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 13:56:32.271332   15892 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-243127" context rescaled to 1 replica
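
Editor's note: the rescale above corresponds to dropping the coredns Deployment to a single replica, which minikube does on single-node clusters. An equivalent kubectl invocation (illustrative):

	kubectl -n kube-system scale deployment coredns --replicas=1
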
	I1121 13:56:32.534371   15892 addons.go:495] Verifying addon metrics-server=true in "addons-243127"
	I1121 13:56:32.534422   15892 addons.go:495] Verifying addon registry=true in "addons-243127"
	I1121 13:56:32.535743   15892 out.go:179] * Verifying registry addon...
	I1121 13:56:32.538082   15892 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1121 13:56:32.540726   15892 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 13:56:32.540744   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:33.041211   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:33.062627   15892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.245153808s)
	I1121 13:56:33.062674   15892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.242344407s)
	W1121 13:56:33.062709   15892 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1121 13:56:33.062759   15892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.241201565s)
	I1121 13:56:33.062768   15892 retry.go:31] will retry after 271.534933ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
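
Editor's note: this failure is the classic CRD establishment race — the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, and the API server has not begun serving the new kind yet, hence "no matches for kind". minikube just retries (with --force, at 13:56:33.334420 below), and the second apply completes at 13:56:35.772780. A race-free ordering, sketched with the same manifests:

	# 1. Create the CRDs first.
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml

	# 2. Wait until the API server actually serves the new kinds.
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io

	# 3. Only then apply the objects that depend on them.
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	              -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	              -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
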
	I1121 13:56:33.062896   15892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.036878015s)
	I1121 13:56:33.062919   15892 addons.go:495] Verifying addon ingress=true in "addons-243127"
	I1121 13:56:33.063134   15892 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-243127"
	I1121 13:56:33.064409   15892 out.go:179] * Verifying csi-hostpath-driver addon...
	I1121 13:56:33.064473   15892 out.go:179] * Verifying ingress addon...
	I1121 13:56:33.064482   15892 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-243127 service yakd-dashboard -n yakd-dashboard
	
	I1121 13:56:33.066480   15892 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1121 13:56:33.067201   15892 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1121 13:56:33.068620   15892 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 13:56:33.068638   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:33.070789   15892 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1121 13:56:33.070807   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
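
Editor's note: each kapi.go:96 line from here on is one tick of a label-selector poll — list pods matching the label, check the phase, sleep, repeat until Running/Ready or timeout. kubectl collapses the loop into a single wait per selector (labels and namespaces from this log; the 6m timeout is illustrative):

	kubectl -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m
	kubectl -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=6m
	kubectl -n ingress-nginx wait pod \
	  -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m
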
	I1121 13:56:33.334420   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 13:56:33.541330   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:33.569224   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:33.569362   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:33.765065   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:34.040915   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:34.069106   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:34.069217   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:34.540271   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:34.569216   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:34.569295   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:35.041172   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:35.069223   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:35.069280   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:35.540607   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:35.641698   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:35.641813   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:35.772780   15892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.438316046s)
	I1121 13:56:36.041592   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:36.068618   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:36.069600   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:36.265192   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:36.540934   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:36.568722   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:36.569760   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:37.040388   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:37.069377   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:37.069446   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:37.540423   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:37.569355   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:37.569398   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:38.040259   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:38.069400   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:38.069449   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:38.540932   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:38.568774   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:38.569840   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:38.764167   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:38.937855   15892 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1121 13:56:38.937925   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:38.954895   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:39.041950   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:39.052512   15892 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1121 13:56:39.064137   15892 addons.go:239] Setting addon gcp-auth=true in "addons-243127"
	I1121 13:56:39.064183   15892 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:56:39.064515   15892 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:56:39.068837   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:39.069939   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:39.081463   15892 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1121 13:56:39.081508   15892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:56:39.097283   15892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:56:39.187394   15892 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 13:56:39.188465   15892 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1121 13:56:39.189730   15892 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1121 13:56:39.189746   15892 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1121 13:56:39.201780   15892 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1121 13:56:39.201795   15892 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1121 13:56:39.213710   15892 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 13:56:39.213726   15892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1121 13:56:39.225167   15892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 13:56:39.500668   15892 addons.go:495] Verifying addon gcp-auth=true in "addons-243127"
	I1121 13:56:39.501976   15892 out.go:179] * Verifying gcp-auth addon...
	I1121 13:56:39.503640   15892 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1121 13:56:39.505509   15892 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1121 13:56:39.505524   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:39.540132   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:39.568961   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:39.569056   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:40.006196   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:40.040977   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:40.068628   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:40.069779   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:40.506408   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:40.540053   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:40.568674   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:40.570033   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:40.764959   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:41.006293   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:41.040050   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:41.069111   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:41.069114   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:41.506637   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:41.540260   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:41.569361   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:41.569536   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:42.006699   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:42.040662   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:42.068394   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:42.069625   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:42.506871   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:42.540637   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:42.568462   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:42.569410   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:43.006704   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:43.040375   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:43.069308   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:43.069460   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:43.264367   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:43.506822   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:43.540630   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:43.568359   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:43.569459   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:44.006623   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:44.040438   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:44.068551   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:44.069682   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:44.507208   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:44.539979   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:44.568589   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:44.569786   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:45.005968   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:45.041164   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:45.069338   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:45.069451   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:45.265439   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:45.507492   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:45.540252   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:45.569084   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:45.569164   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:46.006761   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:46.040660   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:46.068415   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:46.069649   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:46.507019   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:46.540697   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:46.568412   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:46.569613   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:47.006619   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:47.040533   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:47.069550   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:47.069644   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:47.506621   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:47.540396   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:47.569618   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:47.569645   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 13:56:47.764461   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:48.006819   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:48.040603   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:48.068266   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:48.069409   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:48.506477   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:48.540029   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:48.568703   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:48.569924   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:49.006604   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:49.040495   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:49.068647   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:49.069660   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:49.506165   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:49.540993   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:49.568770   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:49.569821   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:49.764805   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:50.006166   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:50.040897   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:50.068617   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:50.069903   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:50.506672   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:50.540464   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:50.569360   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:50.569417   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:51.006629   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:51.040487   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:51.069323   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:51.069358   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:51.505942   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:51.540736   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:51.568462   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:51.569512   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:52.006811   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:52.040581   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:52.068290   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:52.069343   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:52.264463   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:52.506994   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:52.540626   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:52.568202   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:52.569455   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:53.006359   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:53.040138   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:53.068946   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:53.069039   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:53.506641   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:53.540253   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:53.569221   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:53.569471   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:54.006264   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:54.040116   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:54.069320   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:54.069368   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:54.265026   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:54.506815   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:54.540468   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:54.569312   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:54.569513   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:55.006486   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:55.040199   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:55.069340   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:55.069518   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:55.506169   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:55.539903   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:55.568622   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:55.569733   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:56.006106   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:56.040882   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:56.068713   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:56.069678   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:56.506326   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:56.540062   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:56.569029   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:56.569102   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:56.765153   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:57.006633   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:57.040523   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:57.069610   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:57.069646   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:57.505917   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:57.540876   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:57.568779   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:57.569936   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:58.006278   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:58.040043   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:58.068923   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:58.068967   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:58.506451   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:58.540195   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:58.569184   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:58.569339   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:59.006360   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:59.040083   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:59.069206   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:56:59.069319   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:56:59.265264   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:56:59.506497   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:56:59.540312   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:56:59.569320   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:56:59.569378   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:00.006202   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:00.039851   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:00.068729   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:00.069846   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:00.506266   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:00.540045   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:00.569046   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:00.569056   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:01.006120   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:01.039879   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:01.068680   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:01.069744   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:01.506202   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:01.539950   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:01.568729   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:01.572713   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:57:01.764891   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:57:02.006324   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:02.040288   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:02.069497   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:02.069609   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:02.505725   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:02.540575   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:02.569503   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:02.569615   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:03.006312   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:03.039905   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:03.068679   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:03.069924   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:03.506735   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:03.540475   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:03.569337   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:03.569547   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:04.007118   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:04.040899   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:04.069006   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:04.069147   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:57:04.264987   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:57:04.506578   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:04.540283   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:04.569208   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:04.569282   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:05.006295   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:05.040065   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:05.069063   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:05.069104   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:05.506782   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:05.540550   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:05.568322   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:05.569380   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:06.006188   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:06.040025   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:06.068971   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:06.069090   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:57:06.265289   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:57:06.506642   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:06.540485   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:06.569438   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:06.569536   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:07.006955   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:07.040744   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:07.068753   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:07.069839   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:07.506546   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:07.540504   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:07.568340   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:07.569441   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:08.006650   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:08.040525   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:08.069685   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:08.069761   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:08.506122   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:08.540973   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:08.568614   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:08.569924   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:57:08.764843   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:57:09.005963   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:09.040738   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:09.068944   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:09.069981   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:09.506418   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:09.540287   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:09.569186   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:09.569235   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:10.006523   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:10.040417   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:10.069466   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:10.069652   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:10.505895   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:10.540679   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:10.568475   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:10.569727   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 13:57:10.765389   15892 node_ready.go:57] node "addons-243127" has "Ready":"False" status (will retry)
	I1121 13:57:11.005747   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:11.040536   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:11.069408   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:11.069574   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:11.506295   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:11.540206   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:11.569265   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:11.569370   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:12.008711   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:12.042353   15892 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 13:57:12.042379   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:12.073706   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:12.073770   15892 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 13:57:12.073791   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
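
Each kapi.go:96 line above is one poll iteration: list the pods matching a label selector and keep waiting while any of them is still Pending. Below is a minimal client-go sketch of that pattern; the namespace, selector, kubeconfig path, and half-second interval are illustrative guesses, not minikube's actual wiring.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady lists the pods matching sel in ns and reports whether every
// one of them has reached the Running phase. A sketch of the polling
// pattern behind the kapi.go:96 lines, not minikube's implementation.
func podsReady(ctx context.Context, cs *kubernetes.Clientset, ns, sel string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil // nothing scheduled yet; keep waiting
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil // at least one pod is still Pending
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := podsReady(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry")
		if err == nil && ok {
			break
		}
		time.Sleep(500 * time.Millisecond) // the log above polls roughly twice a second
	}
	fmt.Println("pods ready")
}

Listing by selector rather than by pod name is what lets such a loop notice the "Found 2 Pods" transition at 13:57:12 without knowing pod names up front.
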
	I1121 13:57:12.264343   15892 node_ready.go:49] node "addons-243127" is "Ready"
	I1121 13:57:12.264367   15892 node_ready.go:38] duration metric: took 40.502022718s for node "addons-243127" to be "Ready" ...
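
The node_ready.go warnings above flip to Ready at 13:57:12 once a single field changes: the Node's Ready condition. A hedged client-go sketch of that check follows; the kubeconfig path is hypothetical.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the named Node and reports whether its Ready
// condition is True, the field the node_ready.go lines above poll.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // no Ready condition reported yet
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := nodeReady(context.Background(), cs, "addons-243127")
	fmt.Println("ready:", ok, "err:", err)
}
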
	I1121 13:57:12.264380   15892 api_server.go:52] waiting for apiserver process to appear ...
	I1121 13:57:12.264431   15892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 13:57:12.280931   15892 api_server.go:72] duration metric: took 41.053141386s to wait for apiserver process to appear ...
	I1121 13:57:12.280968   15892 api_server.go:88] waiting for apiserver healthz status ...
	I1121 13:57:12.280989   15892 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1121 13:57:12.285197   15892 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1121 13:57:12.285936   15892 api_server.go:141] control plane version: v1.34.1
	I1121 13:57:12.285958   15892 api_server.go:131] duration metric: took 4.982226ms to wait for apiserver health ...
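
The healthz probe logged at 13:57:12.28 is a plain HTTPS GET that expects a 200 response with body "ok". A sketch follows; InsecureSkipVerify is an assumption for brevity, whereas the real client authenticates with the cluster's certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

// checkHealthz GETs the apiserver's /healthz endpoint and treats a 200
// response with body "ok" as healthy, matching the log lines above.
// Sketch only: TLS verification is skipped here (see lead-in).
func checkHealthz(url string) error {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.49.2:8443/healthz"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}
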
	I1121 13:57:12.285968   15892 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 13:57:12.289161   15892 system_pods.go:59] 20 kube-system pods found
	I1121 13:57:12.289189   15892 system_pods.go:61] "amd-gpu-device-plugin-rs4wk" [d044fde9-5989-433c-bea4-d92a04c49500] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 13:57:12.289196   15892 system_pods.go:61] "coredns-66bc5c9577-4zrd8" [d3c3cb4a-fb2e-4e66-bfcc-1627a5fd1398] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:57:12.289203   15892 system_pods.go:61] "csi-hostpath-attacher-0" [c3dafc99-516f-4a8e-b4f7-d89c25df4961] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:57:12.289210   15892 system_pods.go:61] "csi-hostpath-resizer-0" [e9ccb693-950c-4e61-9db5-c3b02b9c5ebb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:57:12.289219   15892 system_pods.go:61] "csi-hostpathplugin-4xdqt" [8963c4d1-c27f-4a22-8820-2ed2b0176b81] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:57:12.289223   15892 system_pods.go:61] "etcd-addons-243127" [48c86679-5869-44a5-8f63-9587dd40dc0c] Running
	I1121 13:57:12.289227   15892 system_pods.go:61] "kindnet-ftx9v" [c1512147-d6dc-4d1b-bc29-edeeb1276825] Running
	I1121 13:57:12.289230   15892 system_pods.go:61] "kube-apiserver-addons-243127" [b9fbd0f1-07a6-44e0-87d9-73871a7270d2] Running
	I1121 13:57:12.289235   15892 system_pods.go:61] "kube-controller-manager-addons-243127" [399be348-dc97-47a2-8417-6cfe4bfd8119] Running
	I1121 13:57:12.289241   15892 system_pods.go:61] "kube-ingress-dns-minikube" [3a1fd53e-57ea-47dc-ae8a-b853499a67b7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:57:12.289248   15892 system_pods.go:61] "kube-proxy-jjn5n" [855bd4fa-48bd-4288-9b4b-7672fea98a04] Running
	I1121 13:57:12.289252   15892 system_pods.go:61] "kube-scheduler-addons-243127" [d7e53232-7ae8-4cbe-8e6f-35e9c89a5144] Running
	I1121 13:57:12.289257   15892 system_pods.go:61] "metrics-server-85b7d694d7-4khd6" [9d42569c-8cf8-439e-858c-1acf1f059214] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:57:12.289265   15892 system_pods.go:61] "nvidia-device-plugin-daemonset-v2h2s" [915d9baa-5e34-4320-9fe6-d65726ad8bb0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:57:12.289270   15892 system_pods.go:61] "registry-6b586f9694-2dn55" [01d1fb94-7e93-4c68-b4a5-4a7aec2eeffb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:57:12.289278   15892 system_pods.go:61] "registry-creds-764b6fb674-w4fpg" [bf5b16be-3fde-465c-8a46-1b7fccb15f4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:57:12.289283   15892 system_pods.go:61] "registry-proxy-9k9gw" [7e745192-f6fe-4677-b1f9-90e0ca68e72e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:57:12.289289   15892 system_pods.go:61] "snapshot-controller-7d9fbc56b8-l5nct" [0f555ce4-a222-47f4-b3d7-f1f1d7e80012] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.289295   15892 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mw8mv" [cbe59326-60f1-4141-9c9a-e2a1976c98d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.289303   15892 system_pods.go:61] "storage-provisioner" [30c258b4-4d04-4c2b-8635-5c9fadbed185] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 13:57:12.289308   15892 system_pods.go:74] duration metric: took 3.334503ms to wait for pod list to return data ...
	I1121 13:57:12.289318   15892 default_sa.go:34] waiting for default service account to be created ...
	I1121 13:57:12.290938   15892 default_sa.go:45] found service account: "default"
	I1121 13:57:12.290954   15892 default_sa.go:55] duration metric: took 1.632314ms for default service account to be created ...
	I1121 13:57:12.290961   15892 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 13:57:12.293632   15892 system_pods.go:86] 20 kube-system pods found
	I1121 13:57:12.293653   15892 system_pods.go:89] "amd-gpu-device-plugin-rs4wk" [d044fde9-5989-433c-bea4-d92a04c49500] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 13:57:12.293660   15892 system_pods.go:89] "coredns-66bc5c9577-4zrd8" [d3c3cb4a-fb2e-4e66-bfcc-1627a5fd1398] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:57:12.293667   15892 system_pods.go:89] "csi-hostpath-attacher-0" [c3dafc99-516f-4a8e-b4f7-d89c25df4961] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:57:12.293672   15892 system_pods.go:89] "csi-hostpath-resizer-0" [e9ccb693-950c-4e61-9db5-c3b02b9c5ebb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:57:12.293678   15892 system_pods.go:89] "csi-hostpathplugin-4xdqt" [8963c4d1-c27f-4a22-8820-2ed2b0176b81] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:57:12.293683   15892 system_pods.go:89] "etcd-addons-243127" [48c86679-5869-44a5-8f63-9587dd40dc0c] Running
	I1121 13:57:12.293687   15892 system_pods.go:89] "kindnet-ftx9v" [c1512147-d6dc-4d1b-bc29-edeeb1276825] Running
	I1121 13:57:12.293691   15892 system_pods.go:89] "kube-apiserver-addons-243127" [b9fbd0f1-07a6-44e0-87d9-73871a7270d2] Running
	I1121 13:57:12.293694   15892 system_pods.go:89] "kube-controller-manager-addons-243127" [399be348-dc97-47a2-8417-6cfe4bfd8119] Running
	I1121 13:57:12.293703   15892 system_pods.go:89] "kube-ingress-dns-minikube" [3a1fd53e-57ea-47dc-ae8a-b853499a67b7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:57:12.293707   15892 system_pods.go:89] "kube-proxy-jjn5n" [855bd4fa-48bd-4288-9b4b-7672fea98a04] Running
	I1121 13:57:12.293711   15892 system_pods.go:89] "kube-scheduler-addons-243127" [d7e53232-7ae8-4cbe-8e6f-35e9c89a5144] Running
	I1121 13:57:12.293715   15892 system_pods.go:89] "metrics-server-85b7d694d7-4khd6" [9d42569c-8cf8-439e-858c-1acf1f059214] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:57:12.293722   15892 system_pods.go:89] "nvidia-device-plugin-daemonset-v2h2s" [915d9baa-5e34-4320-9fe6-d65726ad8bb0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:57:12.293727   15892 system_pods.go:89] "registry-6b586f9694-2dn55" [01d1fb94-7e93-4c68-b4a5-4a7aec2eeffb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:57:12.293735   15892 system_pods.go:89] "registry-creds-764b6fb674-w4fpg" [bf5b16be-3fde-465c-8a46-1b7fccb15f4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:57:12.293741   15892 system_pods.go:89] "registry-proxy-9k9gw" [7e745192-f6fe-4677-b1f9-90e0ca68e72e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:57:12.293747   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l5nct" [0f555ce4-a222-47f4-b3d7-f1f1d7e80012] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.293754   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mw8mv" [cbe59326-60f1-4141-9c9a-e2a1976c98d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.293759   15892 system_pods.go:89] "storage-provisioner" [30c258b4-4d04-4c2b-8635-5c9fadbed185] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 13:57:12.293776   15892 retry.go:31] will retry after 203.129367ms: missing components: kube-dns
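
This retry.go:31 wait and the three that follow (203.129367ms, then 307.17282ms, 410.270718ms, 389.657899ms) are non-round and roughly growing, which suggests a jittered backoff. The sketch below reproduces that shape only; the base step and jitter fraction are guesses, not minikube's constants.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter re-runs check until it succeeds, sleeping a growing,
// randomly jittered interval between attempts. Mirrors the shape of
// the retry.go:31 lines only; the constants here are guesses.
func retryWithJitter(attempts int, base time.Duration, check func() error) error {
	var err error
	for i := 1; i <= attempts; i++ {
		if err = check(); err == nil {
			return nil
		}
		wait := time.Duration(i) * base                     // grow linearly per attempt
		wait += time.Duration(rand.Int63n(int64(wait) / 2)) // add up to 50% jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	n := 0
	err := retryWithJitter(10, 200*time.Millisecond, func() error {
		n++
		if n < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("done:", err)
}
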
	I1121 13:57:12.500221   15892 system_pods.go:86] 20 kube-system pods found
	I1121 13:57:12.500251   15892 system_pods.go:89] "amd-gpu-device-plugin-rs4wk" [d044fde9-5989-433c-bea4-d92a04c49500] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 13:57:12.500260   15892 system_pods.go:89] "coredns-66bc5c9577-4zrd8" [d3c3cb4a-fb2e-4e66-bfcc-1627a5fd1398] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:57:12.500266   15892 system_pods.go:89] "csi-hostpath-attacher-0" [c3dafc99-516f-4a8e-b4f7-d89c25df4961] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:57:12.500272   15892 system_pods.go:89] "csi-hostpath-resizer-0" [e9ccb693-950c-4e61-9db5-c3b02b9c5ebb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:57:12.500277   15892 system_pods.go:89] "csi-hostpathplugin-4xdqt" [8963c4d1-c27f-4a22-8820-2ed2b0176b81] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:57:12.500285   15892 system_pods.go:89] "etcd-addons-243127" [48c86679-5869-44a5-8f63-9587dd40dc0c] Running
	I1121 13:57:12.500289   15892 system_pods.go:89] "kindnet-ftx9v" [c1512147-d6dc-4d1b-bc29-edeeb1276825] Running
	I1121 13:57:12.500292   15892 system_pods.go:89] "kube-apiserver-addons-243127" [b9fbd0f1-07a6-44e0-87d9-73871a7270d2] Running
	I1121 13:57:12.500295   15892 system_pods.go:89] "kube-controller-manager-addons-243127" [399be348-dc97-47a2-8417-6cfe4bfd8119] Running
	I1121 13:57:12.500301   15892 system_pods.go:89] "kube-ingress-dns-minikube" [3a1fd53e-57ea-47dc-ae8a-b853499a67b7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:57:12.500305   15892 system_pods.go:89] "kube-proxy-jjn5n" [855bd4fa-48bd-4288-9b4b-7672fea98a04] Running
	I1121 13:57:12.500310   15892 system_pods.go:89] "kube-scheduler-addons-243127" [d7e53232-7ae8-4cbe-8e6f-35e9c89a5144] Running
	I1121 13:57:12.500317   15892 system_pods.go:89] "metrics-server-85b7d694d7-4khd6" [9d42569c-8cf8-439e-858c-1acf1f059214] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:57:12.500327   15892 system_pods.go:89] "nvidia-device-plugin-daemonset-v2h2s" [915d9baa-5e34-4320-9fe6-d65726ad8bb0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:57:12.500335   15892 system_pods.go:89] "registry-6b586f9694-2dn55" [01d1fb94-7e93-4c68-b4a5-4a7aec2eeffb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:57:12.500340   15892 system_pods.go:89] "registry-creds-764b6fb674-w4fpg" [bf5b16be-3fde-465c-8a46-1b7fccb15f4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:57:12.500348   15892 system_pods.go:89] "registry-proxy-9k9gw" [7e745192-f6fe-4677-b1f9-90e0ca68e72e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:57:12.500353   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l5nct" [0f555ce4-a222-47f4-b3d7-f1f1d7e80012] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.500358   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mw8mv" [cbe59326-60f1-4141-9c9a-e2a1976c98d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.500363   15892 system_pods.go:89] "storage-provisioner" [30c258b4-4d04-4c2b-8635-5c9fadbed185] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 13:57:12.500380   15892 retry.go:31] will retry after 307.17282ms: missing components: kube-dns
	I1121 13:57:12.505358   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:12.599581   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:12.599584   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:12.599674   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:12.811975   15892 system_pods.go:86] 20 kube-system pods found
	I1121 13:57:12.812004   15892 system_pods.go:89] "amd-gpu-device-plugin-rs4wk" [d044fde9-5989-433c-bea4-d92a04c49500] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 13:57:12.812012   15892 system_pods.go:89] "coredns-66bc5c9577-4zrd8" [d3c3cb4a-fb2e-4e66-bfcc-1627a5fd1398] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:57:12.812018   15892 system_pods.go:89] "csi-hostpath-attacher-0" [c3dafc99-516f-4a8e-b4f7-d89c25df4961] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:57:12.812025   15892 system_pods.go:89] "csi-hostpath-resizer-0" [e9ccb693-950c-4e61-9db5-c3b02b9c5ebb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:57:12.812030   15892 system_pods.go:89] "csi-hostpathplugin-4xdqt" [8963c4d1-c27f-4a22-8820-2ed2b0176b81] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:57:12.812033   15892 system_pods.go:89] "etcd-addons-243127" [48c86679-5869-44a5-8f63-9587dd40dc0c] Running
	I1121 13:57:12.812037   15892 system_pods.go:89] "kindnet-ftx9v" [c1512147-d6dc-4d1b-bc29-edeeb1276825] Running
	I1121 13:57:12.812040   15892 system_pods.go:89] "kube-apiserver-addons-243127" [b9fbd0f1-07a6-44e0-87d9-73871a7270d2] Running
	I1121 13:57:12.812044   15892 system_pods.go:89] "kube-controller-manager-addons-243127" [399be348-dc97-47a2-8417-6cfe4bfd8119] Running
	I1121 13:57:12.812058   15892 system_pods.go:89] "kube-ingress-dns-minikube" [3a1fd53e-57ea-47dc-ae8a-b853499a67b7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:57:12.812065   15892 system_pods.go:89] "kube-proxy-jjn5n" [855bd4fa-48bd-4288-9b4b-7672fea98a04] Running
	I1121 13:57:12.812069   15892 system_pods.go:89] "kube-scheduler-addons-243127" [d7e53232-7ae8-4cbe-8e6f-35e9c89a5144] Running
	I1121 13:57:12.812073   15892 system_pods.go:89] "metrics-server-85b7d694d7-4khd6" [9d42569c-8cf8-439e-858c-1acf1f059214] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:57:12.812080   15892 system_pods.go:89] "nvidia-device-plugin-daemonset-v2h2s" [915d9baa-5e34-4320-9fe6-d65726ad8bb0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:57:12.812088   15892 system_pods.go:89] "registry-6b586f9694-2dn55" [01d1fb94-7e93-4c68-b4a5-4a7aec2eeffb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:57:12.812093   15892 system_pods.go:89] "registry-creds-764b6fb674-w4fpg" [bf5b16be-3fde-465c-8a46-1b7fccb15f4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:57:12.812100   15892 system_pods.go:89] "registry-proxy-9k9gw" [7e745192-f6fe-4677-b1f9-90e0ca68e72e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:57:12.812105   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l5nct" [0f555ce4-a222-47f4-b3d7-f1f1d7e80012] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.812114   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mw8mv" [cbe59326-60f1-4141-9c9a-e2a1976c98d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:12.812119   15892 system_pods.go:89] "storage-provisioner" [30c258b4-4d04-4c2b-8635-5c9fadbed185] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 13:57:12.812134   15892 retry.go:31] will retry after 410.270718ms: missing components: kube-dns
	I1121 13:57:13.006959   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:13.041234   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:13.069635   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:13.069662   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:13.229236   15892 system_pods.go:86] 20 kube-system pods found
	I1121 13:57:13.229274   15892 system_pods.go:89] "amd-gpu-device-plugin-rs4wk" [d044fde9-5989-433c-bea4-d92a04c49500] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 13:57:13.229284   15892 system_pods.go:89] "coredns-66bc5c9577-4zrd8" [d3c3cb4a-fb2e-4e66-bfcc-1627a5fd1398] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 13:57:13.229295   15892 system_pods.go:89] "csi-hostpath-attacher-0" [c3dafc99-516f-4a8e-b4f7-d89c25df4961] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:57:13.229303   15892 system_pods.go:89] "csi-hostpath-resizer-0" [e9ccb693-950c-4e61-9db5-c3b02b9c5ebb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:57:13.229312   15892 system_pods.go:89] "csi-hostpathplugin-4xdqt" [8963c4d1-c27f-4a22-8820-2ed2b0176b81] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:57:13.229318   15892 system_pods.go:89] "etcd-addons-243127" [48c86679-5869-44a5-8f63-9587dd40dc0c] Running
	I1121 13:57:13.229324   15892 system_pods.go:89] "kindnet-ftx9v" [c1512147-d6dc-4d1b-bc29-edeeb1276825] Running
	I1121 13:57:13.229329   15892 system_pods.go:89] "kube-apiserver-addons-243127" [b9fbd0f1-07a6-44e0-87d9-73871a7270d2] Running
	I1121 13:57:13.229335   15892 system_pods.go:89] "kube-controller-manager-addons-243127" [399be348-dc97-47a2-8417-6cfe4bfd8119] Running
	I1121 13:57:13.229345   15892 system_pods.go:89] "kube-ingress-dns-minikube" [3a1fd53e-57ea-47dc-ae8a-b853499a67b7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:57:13.229350   15892 system_pods.go:89] "kube-proxy-jjn5n" [855bd4fa-48bd-4288-9b4b-7672fea98a04] Running
	I1121 13:57:13.229355   15892 system_pods.go:89] "kube-scheduler-addons-243127" [d7e53232-7ae8-4cbe-8e6f-35e9c89a5144] Running
	I1121 13:57:13.229362   15892 system_pods.go:89] "metrics-server-85b7d694d7-4khd6" [9d42569c-8cf8-439e-858c-1acf1f059214] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:57:13.229370   15892 system_pods.go:89] "nvidia-device-plugin-daemonset-v2h2s" [915d9baa-5e34-4320-9fe6-d65726ad8bb0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:57:13.229378   15892 system_pods.go:89] "registry-6b586f9694-2dn55" [01d1fb94-7e93-4c68-b4a5-4a7aec2eeffb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:57:13.229395   15892 system_pods.go:89] "registry-creds-764b6fb674-w4fpg" [bf5b16be-3fde-465c-8a46-1b7fccb15f4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:57:13.229402   15892 system_pods.go:89] "registry-proxy-9k9gw" [7e745192-f6fe-4677-b1f9-90e0ca68e72e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:57:13.229410   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l5nct" [0f555ce4-a222-47f4-b3d7-f1f1d7e80012] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:13.229419   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mw8mv" [cbe59326-60f1-4141-9c9a-e2a1976c98d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:13.229424   15892 system_pods.go:89] "storage-provisioner" [30c258b4-4d04-4c2b-8635-5c9fadbed185] Running
	I1121 13:57:13.229441   15892 retry.go:31] will retry after 389.657899ms: missing components: kube-dns
	I1121 13:57:13.507378   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:13.541866   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:13.572015   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:13.572221   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:13.624636   15892 system_pods.go:86] 20 kube-system pods found
	I1121 13:57:13.624667   15892 system_pods.go:89] "amd-gpu-device-plugin-rs4wk" [d044fde9-5989-433c-bea4-d92a04c49500] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 13:57:13.624675   15892 system_pods.go:89] "coredns-66bc5c9577-4zrd8" [d3c3cb4a-fb2e-4e66-bfcc-1627a5fd1398] Running
	I1121 13:57:13.624686   15892 system_pods.go:89] "csi-hostpath-attacher-0" [c3dafc99-516f-4a8e-b4f7-d89c25df4961] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 13:57:13.624694   15892 system_pods.go:89] "csi-hostpath-resizer-0" [e9ccb693-950c-4e61-9db5-c3b02b9c5ebb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 13:57:13.624703   15892 system_pods.go:89] "csi-hostpathplugin-4xdqt" [8963c4d1-c27f-4a22-8820-2ed2b0176b81] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 13:57:13.624719   15892 system_pods.go:89] "etcd-addons-243127" [48c86679-5869-44a5-8f63-9587dd40dc0c] Running
	I1121 13:57:13.624725   15892 system_pods.go:89] "kindnet-ftx9v" [c1512147-d6dc-4d1b-bc29-edeeb1276825] Running
	I1121 13:57:13.624730   15892 system_pods.go:89] "kube-apiserver-addons-243127" [b9fbd0f1-07a6-44e0-87d9-73871a7270d2] Running
	I1121 13:57:13.624735   15892 system_pods.go:89] "kube-controller-manager-addons-243127" [399be348-dc97-47a2-8417-6cfe4bfd8119] Running
	I1121 13:57:13.624743   15892 system_pods.go:89] "kube-ingress-dns-minikube" [3a1fd53e-57ea-47dc-ae8a-b853499a67b7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 13:57:13.624747   15892 system_pods.go:89] "kube-proxy-jjn5n" [855bd4fa-48bd-4288-9b4b-7672fea98a04] Running
	I1121 13:57:13.624753   15892 system_pods.go:89] "kube-scheduler-addons-243127" [d7e53232-7ae8-4cbe-8e6f-35e9c89a5144] Running
	I1121 13:57:13.624760   15892 system_pods.go:89] "metrics-server-85b7d694d7-4khd6" [9d42569c-8cf8-439e-858c-1acf1f059214] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 13:57:13.624768   15892 system_pods.go:89] "nvidia-device-plugin-daemonset-v2h2s" [915d9baa-5e34-4320-9fe6-d65726ad8bb0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 13:57:13.624778   15892 system_pods.go:89] "registry-6b586f9694-2dn55" [01d1fb94-7e93-4c68-b4a5-4a7aec2eeffb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 13:57:13.624785   15892 system_pods.go:89] "registry-creds-764b6fb674-w4fpg" [bf5b16be-3fde-465c-8a46-1b7fccb15f4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 13:57:13.624792   15892 system_pods.go:89] "registry-proxy-9k9gw" [7e745192-f6fe-4677-b1f9-90e0ca68e72e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 13:57:13.624803   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l5nct" [0f555ce4-a222-47f4-b3d7-f1f1d7e80012] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:13.624812   15892 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mw8mv" [cbe59326-60f1-4141-9c9a-e2a1976c98d1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 13:57:13.624818   15892 system_pods.go:89] "storage-provisioner" [30c258b4-4d04-4c2b-8635-5c9fadbed185] Running
	I1121 13:57:13.624828   15892 system_pods.go:126] duration metric: took 1.333860578s to wait for k8s-apps to be running ...
	I1121 13:57:13.624837   15892 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 13:57:13.624886   15892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 13:57:13.644018   15892 system_svc.go:56] duration metric: took 19.174078ms WaitForService to wait for kubelet
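
The kubelet check at 13:57:13.62 shells out to systemctl; with is-active --quiet, liveness is reported purely through the exit status. A local sketch follows (per the Run: line above, minikube executes the equivalent command over SSH inside the node, under sudo).

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive runs `systemctl is-active --quiet kubelet` locally and
// maps the exit status to a bool: systemd exits 0 when the unit is
// active, non-zero otherwise.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}
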
	I1121 13:57:13.644045   15892 kubeadm.go:587] duration metric: took 42.416259147s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 13:57:13.644066   15892 node_conditions.go:102] verifying NodePressure condition ...
	I1121 13:57:13.647022   15892 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 13:57:13.647052   15892 node_conditions.go:123] node cpu capacity is 8
	I1121 13:57:13.647065   15892 node_conditions.go:105] duration metric: took 2.993925ms to run NodePressure ...
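
The node_conditions.go figures above (304681132Ki ephemeral storage, 8 CPUs) come straight off the Node object's capacity map. A short sketch of reading those fields, reusing the same hypothetical kubeconfig setup as the earlier sketches:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-243127", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Capacity is a map[ResourceName]resource.Quantity on the Node
	// status; these two entries are the figures reported above.
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
	fmt.Printf("node storage ephemeral capacity is %s\n", eph.String())
}
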
	I1121 13:57:13.647079   15892 start.go:242] waiting for startup goroutines ...
	I1121 13:57:14.007531   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:14.041195   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:14.069914   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:14.070129   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:14.507219   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:14.540409   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:14.570081   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:14.570246   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:15.007015   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:15.041356   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:15.070069   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:15.070157   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:15.506833   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:15.541264   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:15.569754   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:15.569777   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:16.007862   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:16.041344   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:16.070150   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:16.070277   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:16.507506   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:16.541165   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:16.569955   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:16.570111   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:17.006460   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:17.040507   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:17.069781   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:17.069833   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:17.506976   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:17.541409   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:17.569858   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:17.570000   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:18.007356   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:18.041967   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:18.070427   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:18.070441   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:18.507233   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:18.540713   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:18.569408   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:18.570156   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:19.007091   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:19.041684   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:19.069555   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:19.069936   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:19.507127   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:19.541706   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:19.570059   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:19.570413   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:20.007133   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:20.117216   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:20.117532   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:20.117543   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:20.506846   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:20.608069   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:20.608160   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:20.608196   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:21.006700   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:21.041087   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:21.070130   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:21.070215   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:21.507265   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:21.540789   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:21.570150   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:21.570345   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:22.006300   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:22.040180   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:22.069428   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:22.069551   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:22.507654   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:22.608249   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:22.608368   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:22.608394   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:23.007213   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:23.040512   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:23.070179   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:23.070200   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:23.507129   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:23.541456   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:23.570175   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:23.570386   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:24.006724   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:24.040447   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:24.068424   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:24.069685   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:24.507086   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:24.608273   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:24.608365   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:24.608597   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:25.006516   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:25.040379   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:25.069461   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:25.069639   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:25.506270   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:25.540946   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:25.569520   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:25.570050   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:26.007045   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:26.041611   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:26.069305   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:26.069813   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:26.507041   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:26.607931   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:26.607985   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:26.608161   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:27.006778   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:27.040782   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:27.069657   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:27.070156   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:27.507266   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:27.540321   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:27.608473   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:27.608525   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:28.007419   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:28.041094   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:28.069873   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:28.069877   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:28.506430   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:28.540283   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:28.608155   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:28.608169   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:29.006946   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:29.041415   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:29.070205   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:29.070399   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:29.507263   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:29.540300   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:29.569880   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:29.570125   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:30.006231   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:30.040716   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:30.069378   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:30.069890   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:30.506352   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:30.540255   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:30.569675   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:30.569732   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:31.006405   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:31.040968   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:31.069858   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:31.070432   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:31.540742   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:31.541346   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:31.643471   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:31.643544   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:32.006046   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:32.041290   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:32.070253   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:32.070584   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:32.507494   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:32.541245   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:32.608702   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:32.608848   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:33.006332   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:33.040369   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:33.069422   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:33.069494   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:33.506883   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:33.540586   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:33.569795   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:33.569834   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:34.005857   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:34.040852   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:34.069222   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:34.070164   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:34.507177   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:34.541404   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:34.569679   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:34.571006   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:35.006757   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:35.040995   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:35.069778   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:35.070210   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:35.507430   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:35.540697   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:35.568809   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:35.569771   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:36.006625   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:36.041077   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:36.069864   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:36.069901   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:36.506641   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:36.540774   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:36.569158   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:36.570158   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:37.006662   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:37.040495   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:37.069234   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:37.069472   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:37.507677   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:37.542023   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:37.570061   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:37.570262   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:38.007216   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:38.040614   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 13:57:38.068979   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:38.069772   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:38.506266   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:38.540042   15892 kapi.go:107] duration metric: took 1m6.00195971s to wait for kubernetes.io/minikube-addons=registry ...
	I1121 13:57:38.570550   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:38.570737   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:39.007599   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:39.071404   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:39.071606   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:39.506930   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:39.569379   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:39.570130   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:40.007228   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:40.070018   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:40.070115   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:40.507081   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:40.570150   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:40.570154   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:41.007427   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:41.070753   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:41.070915   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:41.506341   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:41.569422   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:41.569683   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:42.007490   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:42.070306   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:42.070336   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:42.507109   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:42.569729   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:42.569821   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:43.006108   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:43.070026   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:43.070040   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:43.506938   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:43.569901   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:43.570305   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:44.006902   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:44.107513   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:44.107555   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:44.506238   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:44.569675   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:44.569757   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:45.006325   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:45.069751   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:45.069822   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:45.536341   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:45.569925   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:45.570030   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:46.007992   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:46.069960   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 13:57:46.070446   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:46.507764   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:46.569388   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:46.569480   15892 kapi.go:107] duration metric: took 1m13.502997909s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1121 13:57:47.008789   15892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 13:57:47.109632   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:47.507937   15892 kapi.go:107] duration metric: took 1m8.004292079s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1121 13:57:47.509735   15892 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-243127 cluster.
	I1121 13:57:47.511518   15892 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1121 13:57:47.512755   15892 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1121 13:57:47.571759   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:48.070657   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:48.571256   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:49.070790   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:49.571394   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:50.070216   15892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 13:57:50.569617   15892 kapi.go:107] duration metric: took 1m17.502411321s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1121 13:57:50.570986   15892 out.go:179] * Enabled addons: amd-gpu-device-plugin, registry-creds, storage-provisioner, inspektor-gadget, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, storage-provisioner-rancher, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1121 13:57:50.572035   15892 addons.go:530] duration metric: took 1m19.344223281s for enable addons: enabled=[amd-gpu-device-plugin registry-creds storage-provisioner inspektor-gadget nvidia-device-plugin cloud-spanner ingress-dns metrics-server storage-provisioner-rancher yakd default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1121 13:57:50.572070   15892 start.go:247] waiting for cluster config update ...
	I1121 13:57:50.572088   15892 start.go:256] writing updated cluster config ...
	I1121 13:57:50.572299   15892 ssh_runner.go:195] Run: rm -f paused
	I1121 13:57:50.575859   15892 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 13:57:50.578380   15892 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4zrd8" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:50.581628   15892 pod_ready.go:94] pod "coredns-66bc5c9577-4zrd8" is "Ready"
	I1121 13:57:50.581646   15892 pod_ready.go:86] duration metric: took 3.247968ms for pod "coredns-66bc5c9577-4zrd8" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:50.583155   15892 pod_ready.go:83] waiting for pod "etcd-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:50.586203   15892 pod_ready.go:94] pod "etcd-addons-243127" is "Ready"
	I1121 13:57:50.586219   15892 pod_ready.go:86] duration metric: took 3.049799ms for pod "etcd-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:50.587673   15892 pod_ready.go:83] waiting for pod "kube-apiserver-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:50.590771   15892 pod_ready.go:94] pod "kube-apiserver-addons-243127" is "Ready"
	I1121 13:57:50.590791   15892 pod_ready.go:86] duration metric: took 3.100408ms for pod "kube-apiserver-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:50.592116   15892 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:50.979548   15892 pod_ready.go:94] pod "kube-controller-manager-addons-243127" is "Ready"
	I1121 13:57:50.979587   15892 pod_ready.go:86] duration metric: took 387.452087ms for pod "kube-controller-manager-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:51.179410   15892 pod_ready.go:83] waiting for pod "kube-proxy-jjn5n" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:51.585535   15892 pod_ready.go:94] pod "kube-proxy-jjn5n" is "Ready"
	I1121 13:57:51.585610   15892 pod_ready.go:86] duration metric: took 406.173308ms for pod "kube-proxy-jjn5n" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:51.779110   15892 pod_ready.go:83] waiting for pod "kube-scheduler-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:52.179087   15892 pod_ready.go:94] pod "kube-scheduler-addons-243127" is "Ready"
	I1121 13:57:52.179112   15892 pod_ready.go:86] duration metric: took 399.979903ms for pod "kube-scheduler-addons-243127" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 13:57:52.179122   15892 pod_ready.go:40] duration metric: took 1.603241395s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 13:57:52.221722   15892 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 13:57:52.224392   15892 out.go:179] * Done! kubectl is now configured to use "addons-243127" cluster and "default" namespace by default
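
Editor's note: the repeated `kapi.go:96` lines above, and the `pod_ready.go` checks that follow them, show minikube polling pods matched by a label selector on a roughly 500ms cadence until each reports the PodReady condition. The sketch below illustrates that pattern with client-go; it is a minimal reconstruction, not minikube's actual implementation, and the function names are hypothetical.

// Hypothetical sketch of the poll-until-Ready pattern behind the
// kapi.go:96 / pod_ready.go lines above; not minikube's actual code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsReady polls pods matching selector until every matched pod
// reports the PodReady condition, or the context expires.
func waitForPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	tick := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence in the log
	defer tick.Stop()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if !isReady(&p) {
				ready = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

// isReady reports whether the pod's PodReady condition is True.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

The gcp-auth messages in the same log refer to the opt-out mechanism mentioned there: adding a label whose key is `gcp-auth-skip-secret` to a pod's metadata keeps the webhook from mounting credentials into that pod.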
	
	
	==> CRI-O <==
	Nov 21 13:57:51 addons-243127 crio[766]: time="2025-11-21T13:57:51.440756744Z" level=info msg="Deleting pod gcp-auth_gcp-auth-certs-patch-7bgl5 from CNI network \"kindnet\" (type=ptp)"
	Nov 21 13:57:51 addons-243127 crio[766]: time="2025-11-21T13:57:51.456996753Z" level=info msg="Stopped pod sandbox: 85486c55d7e1607de14dce97826b8cd98a74c6de4db0ec636a3f43fb75164cdf" id=77b7dab2-1619-4488-a77f-63ff70efbcf3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.029215615Z" level=info msg="Running pod sandbox: default/busybox/POD" id=aefcb890-559f-4c1b-bb24-0f8894633e34 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.029283477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.035478732Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a6471a4f996f0ca7d105e3273257e5c95c925cdcd1b038eba96e704810b66fbf UID:5ae2154d-5830-41e7-a8ff-ead5aef66f5c NetNS:/var/run/netns/5907954b-bf94-4a00-b41f-053a9d4228f2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000594f68}] Aliases:map[]}"
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.035512742Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.044496662Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a6471a4f996f0ca7d105e3273257e5c95c925cdcd1b038eba96e704810b66fbf UID:5ae2154d-5830-41e7-a8ff-ead5aef66f5c NetNS:/var/run/netns/5907954b-bf94-4a00-b41f-053a9d4228f2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000594f68}] Aliases:map[]}"
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.044657975Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.045384988Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.046178272Z" level=info msg="Ran pod sandbox a6471a4f996f0ca7d105e3273257e5c95c925cdcd1b038eba96e704810b66fbf with infra container: default/busybox/POD" id=aefcb890-559f-4c1b-bb24-0f8894633e34 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.047225411Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=70ebf1d8-ce61-4f54-a075-c68a93835ebb name=/runtime.v1.ImageService/ImageStatus
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.047369847Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=70ebf1d8-ce61-4f54-a075-c68a93835ebb name=/runtime.v1.ImageService/ImageStatus
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.047412217Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=70ebf1d8-ce61-4f54-a075-c68a93835ebb name=/runtime.v1.ImageService/ImageStatus
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.047953597Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2e029c5a-a741-446b-a57a-0e5080abb753 name=/runtime.v1.ImageService/PullImage
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.049374115Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.622299826Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=2e029c5a-a741-446b-a57a-0e5080abb753 name=/runtime.v1.ImageService/PullImage
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.622843939Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=54efa512-ae06-4083-984b-40857fab2656 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.624199209Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0820e67e-45fe-4765-afb2-7c57b57f97a7 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.627270199Z" level=info msg="Creating container: default/busybox/busybox" id=977261f9-aab8-49bf-8e92-3c6cb92e0f0c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.627368572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.632530818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.632947071Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.662744527Z" level=info msg="Created container 01ebaf69022369398421d30991136f62f37807000303a38d00cc8d117ad32d1e: default/busybox/busybox" id=977261f9-aab8-49bf-8e92-3c6cb92e0f0c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.663241222Z" level=info msg="Starting container: 01ebaf69022369398421d30991136f62f37807000303a38d00cc8d117ad32d1e" id=ba13e8a3-8bb0-477f-a322-68c752435efa name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 13:57:53 addons-243127 crio[766]: time="2025-11-21T13:57:53.66494184Z" level=info msg="Started container" PID=6331 containerID=01ebaf69022369398421d30991136f62f37807000303a38d00cc8d117ad32d1e description=default/busybox/busybox id=ba13e8a3-8bb0-477f-a322-68c752435efa name=/runtime.v1.RuntimeService/StartContainer sandboxID=a6471a4f996f0ca7d105e3273257e5c95c925cdcd1b038eba96e704810b66fbf
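
Editor's note: the CRI-O lines above trace the standard CRI gRPC sequence for launching the busybox pod: RunPodSandbox (with CNI setup on the kindnet network), ImageStatus, PullImage only because the image was absent, then CreateContainer and StartContainer inside the sandbox. The following is a hedged sketch of that call order against the CRI v1 API over CRI-O's socket; the sandbox and container configs are taken as parameters and error handling is minimal.

// Hedged sketch of the CRI v1 call sequence visible in the CRI-O log
// above; request configs are elided, and this is not CRI-O/kubelet code.
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func runBusybox(ctx context.Context, sandboxCfg *runtimeapi.PodSandboxConfig, ctrCfg *runtimeapi.ContainerConfig) error {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return err
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// 1. RunPodSandbox: create the pod sandbox and wire up CNI.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return err
	}

	// 2. ImageStatus, then PullImage only when the image is absent,
	//    mirroring the "Image ... not found" -> "Pulling image" lines.
	ref := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
	st, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: ref})
	if err != nil {
		return err
	}
	if st.Image == nil {
		if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: ref}); err != nil {
			return err
		}
	}

	// 3. CreateContainer + StartContainer inside the sandbox.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        ctrCfg,
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return err
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return err
}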
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	01ebaf6902236       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   a6471a4f996f0       busybox                                    default
	ccc5c287b598d       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             12 seconds ago       Running             controller                               0                   d9050b5400ecb       ingress-nginx-controller-6c8bf45fb-ztr8c   ingress-nginx
	2b0fe8267dc04       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             12 seconds ago       Exited              patch                                    2                   85486c55d7e16       gcp-auth-certs-patch-7bgl5                 gcp-auth
	9dfbb58800a98       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 15 seconds ago       Running             gcp-auth                                 0                   cf6b4c497d512       gcp-auth-78565c9fb4-996c5                  gcp-auth
	024fd155c71f4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          16 seconds ago       Running             csi-snapshotter                          0                   7f0f3547e5b3f       csi-hostpathplugin-4xdqt                   kube-system
	ea832cf9137b5       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             16 seconds ago       Exited              patch                                    2                   afa676fb164d3       ingress-nginx-admission-patch-skmg2        ingress-nginx
	a45184ba10995       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          18 seconds ago       Running             csi-provisioner                          0                   7f0f3547e5b3f       csi-hostpathplugin-4xdqt                   kube-system
	c7cea569e79d9       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            19 seconds ago       Running             liveness-probe                           0                   7f0f3547e5b3f       csi-hostpathplugin-4xdqt                   kube-system
	f87ee9ca1eb0d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           20 seconds ago       Running             hostpath                                 0                   7f0f3547e5b3f       csi-hostpathplugin-4xdqt                   kube-system
	b69dec080641a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                21 seconds ago       Running             node-driver-registrar                    0                   7f0f3547e5b3f       csi-hostpathplugin-4xdqt                   kube-system
	9298da412a875       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            21 seconds ago       Running             gadget                                   0                   b0b9918aa76e1       gadget-hm6p2                               gadget
	733ab2d4f270d       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              24 seconds ago       Running             registry-proxy                           0                   8c9a9a16b0b64       registry-proxy-9k9gw                       kube-system
	6d89596515e60       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   25 seconds ago       Running             csi-external-health-monitor-controller   0                   7f0f3547e5b3f       csi-hostpathplugin-4xdqt                   kube-system
	12a7ad65b01f3       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     26 seconds ago       Running             nvidia-device-plugin-ctr                 0                   d82c842fe855b       nvidia-device-plugin-daemonset-v2h2s       kube-system
	fe44db038e23b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   28 seconds ago       Exited              create                                   0                   ff3f053b64982       gcp-auth-certs-create-s29b9                gcp-auth
	d4c89f2bb2211       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   29 seconds ago       Exited              create                                   0                   788e7d5e412cb       ingress-nginx-admission-create-l9vg2       ingress-nginx
	3179500ac7719       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      29 seconds ago       Running             volume-snapshot-controller               0                   1452bb4294121       snapshot-controller-7d9fbc56b8-l5nct       kube-system
	994774c9ca4f6       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             29 seconds ago       Running             csi-attacher                             0                   21bd0b917b208       csi-hostpath-attacher-0                    kube-system
	32c8ebd634641       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     30 seconds ago       Running             amd-gpu-device-plugin                    0                   bb211bfd6fb89       amd-gpu-device-plugin-rs4wk                kube-system
	56057aee31072       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        32 seconds ago       Running             metrics-server                           0                   61805deadcee8       metrics-server-85b7d694d7-4khd6            kube-system
	77eb40d30250c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      33 seconds ago       Running             volume-snapshot-controller               0                   f678007ed80fe       snapshot-controller-7d9fbc56b8-mw8mv       kube-system
	7d1e97c795b00       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              34 seconds ago       Running             csi-resizer                              0                   70f920cc3b876       csi-hostpath-resizer-0                     kube-system
	e40f4e5a20c27       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               35 seconds ago       Running             cloud-spanner-emulator                   0                   5b463692184c5       cloud-spanner-emulator-6f9fcf858b-mpbzj    default
	f49ed0d95068e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             39 seconds ago       Running             local-path-provisioner                   0                   ef4c49a25966c       local-path-provisioner-648f6765c9-k4mfq    local-path-storage
	b3df341d90d52       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           40 seconds ago       Running             registry                                 0                   a6bfd5f6b61bc       registry-6b586f9694-2dn55                  kube-system
	545f7855f22e4       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              42 seconds ago       Running             yakd                                     0                   299ee5353cffc       yakd-dashboard-5ff678cb9-mwqnd             yakd-dashboard
	a862b6c84241d       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               45 seconds ago       Running             minikube-ingress-dns                     0                   b52ff90310e1c       kube-ingress-dns-minikube                  kube-system
	7d8b7a3c495d7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             50 seconds ago       Running             storage-provisioner                      0                   94761203a3a56       storage-provisioner                        kube-system
	8a5df4965546d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             50 seconds ago       Running             coredns                                  0                   27999ed8852c0       coredns-66bc5c9577-4zrd8                   kube-system
	7fb6fcbbcafef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   2e3bd4f96a429       kube-proxy-jjn5n                           kube-system
	66f416a261181       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   e2c7255a2a076       kindnet-ftx9v                              kube-system
	b49596a0b2d4d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   80505f572dcc4       kube-controller-manager-addons-243127      kube-system
	6bc0a23d21b59       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   bb614fdcf0ded       kube-apiserver-addons-243127               kube-system
	61ca322c06942       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   cc8a92808ab8f       etcd-addons-243127                         kube-system
	19610d1d8120b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   b9bc571b017cd       kube-scheduler-addons-243127               kube-system
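
Editor's note: the table above is a CRI container listing of the kind `crictl ps -a` produces. A compact equivalent can be obtained through the same CRI API; the sketch below reuses the `runtimeapi` import and a `RuntimeServiceClient` as constructed in the previous sketch (plus `context` and `fmt`), and is illustrative only.

// listContainers prints a compact view akin to the table above; rt is a
// runtimeapi.RuntimeServiceClient as dialed in the previous sketch.
func listContainers(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		return err
	}
	for _, c := range resp.Containers {
		// Truncated ID, state enum, container name, image reference.
		fmt.Printf("%-13.13s  %-22s  %-24s  %s\n",
			c.Id, c.State, c.Metadata.Name, c.Image.Image)
	}
	return nil
}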
	
	
	==> coredns [8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262] <==
	[INFO] 10.244.0.11:52757 - 53488 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.004026893s
	[INFO] 10.244.0.11:35350 - 41319 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000067474s
	[INFO] 10.244.0.11:35350 - 41005 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000083762s
	[INFO] 10.244.0.11:38159 - 35633 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000063146s
	[INFO] 10.244.0.11:38159 - 35355 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000090945s
	[INFO] 10.244.0.11:55816 - 64404 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000052626s
	[INFO] 10.244.0.11:55816 - 64211 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000089399s
	[INFO] 10.244.0.11:50254 - 32401 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000091704s
	[INFO] 10.244.0.11:50254 - 32000 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000117434s
	[INFO] 10.244.0.22:35301 - 58450 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000160072s
	[INFO] 10.244.0.22:57265 - 23762 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000242119s
	[INFO] 10.244.0.22:37903 - 57917 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116347s
	[INFO] 10.244.0.22:52535 - 50530 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128436s
	[INFO] 10.244.0.22:35972 - 35670 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126666s
	[INFO] 10.244.0.22:41567 - 14817 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000200648s
	[INFO] 10.244.0.22:43717 - 6760 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.002616446s
	[INFO] 10.244.0.22:34173 - 4218 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.002650355s
	[INFO] 10.244.0.22:57876 - 35002 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.003959329s
	[INFO] 10.244.0.22:51798 - 16114 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004837409s
	[INFO] 10.244.0.22:41783 - 7225 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004018615s
	[INFO] 10.244.0.22:41699 - 16665 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005204606s
	[INFO] 10.244.0.22:49251 - 45049 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005487082s
	[INFO] 10.244.0.22:51551 - 34388 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006091221s
	[INFO] 10.244.0.22:56289 - 57052 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000994965s
	[INFO] 10.244.0.22:40674 - 42662 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001996899s
	
	
	==> describe nodes <==
	Name:               addons-243127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-243127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=addons-243127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T13_56_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-243127
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-243127"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 13:56:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-243127
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 13:57:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 13:57:57 +0000   Fri, 21 Nov 2025 13:56:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 13:57:57 +0000   Fri, 21 Nov 2025 13:56:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 13:57:57 +0000   Fri, 21 Nov 2025 13:56:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 13:57:57 +0000   Fri, 21 Nov 2025 13:57:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-243127
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                93b2b064-c0a3-4ff3-b97c-0aadda05f1d2
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-6f9fcf858b-mpbzj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  gadget                      gadget-hm6p2                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  gcp-auth                    gcp-auth-78565c9fb4-996c5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-ztr8c    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         90s
	  kube-system                 amd-gpu-device-plugin-rs4wk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 coredns-66bc5c9577-4zrd8                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     92s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 csi-hostpathplugin-4xdqt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 etcd-addons-243127                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         97s
	  kube-system                 kindnet-ftx9v                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      92s
	  kube-system                 kube-apiserver-addons-243127                250m (3%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-addons-243127       200m (2%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-jjn5n                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-addons-243127                100m (1%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 metrics-server-85b7d694d7-4khd6             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         90s
	  kube-system                 nvidia-device-plugin-daemonset-v2h2s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 registry-6b586f9694-2dn55                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 registry-creds-764b6fb674-w4fpg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 registry-proxy-9k9gw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 snapshot-controller-7d9fbc56b8-l5nct        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 snapshot-controller-7d9fbc56b8-mw8mv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  local-path-storage          local-path-provisioner-648f6765c9-k4mfq     0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-mwqnd              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 91s                  kube-proxy       
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node addons-243127 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node addons-243127 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x8 over 102s)  kubelet          Node addons-243127 status is now: NodeHasSufficientPID
	  Normal  Starting                 97s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s                  kubelet          Node addons-243127 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s                  kubelet          Node addons-243127 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s                  kubelet          Node addons-243127 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           93s                  node-controller  Node addons-243127 event: Registered Node addons-243127 in Controller
	  Normal  NodeReady                51s                  kubelet          Node addons-243127 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 13:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000998] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.368470] i8042: Warning: Keylock active
	[  +0.010492] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503643] block sda: the capability attribute has been deprecated.
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1] <==
	{"level":"warn","ts":"2025-11-21T13:56:22.244165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.252704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.258063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.264248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.269703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.275974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.282886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.289089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.294678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.300302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.305936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.311746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.317402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.331524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.337244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.343519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:22.387766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:33.547857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:59.753261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:59.760087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:59.774413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T13:56:59.780610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58540","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T13:57:19.926728Z","caller":"traceutil/trace.go:172","msg":"trace[1719754312] transaction","detail":"{read_only:false; response_revision:992; number_of_response:1; }","duration":"115.338233ms","start":"2025-11-21T13:57:19.811367Z","end":"2025-11-21T13:57:19.926705Z","steps":["trace[1719754312] 'process raft request'  (duration: 61.395326ms)","trace[1719754312] 'compare'  (duration: 53.879922ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-21T13:57:31.395119Z","caller":"traceutil/trace.go:172","msg":"trace[1058167776] transaction","detail":"{read_only:false; response_revision:1068; number_of_response:1; }","duration":"104.795927ms","start":"2025-11-21T13:57:31.290306Z","end":"2025-11-21T13:57:31.395102Z","steps":["trace[1058167776] 'process raft request'  (duration: 104.707018ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T13:57:31.539346Z","caller":"traceutil/trace.go:172","msg":"trace[2071703886] transaction","detail":"{read_only:false; response_revision:1073; number_of_response:1; }","duration":"128.476604ms","start":"2025-11-21T13:57:31.410850Z","end":"2025-11-21T13:57:31.539327Z","steps":["trace[2071703886] 'process raft request'  (duration: 85.698692ms)","trace[2071703886] 'compare'  (duration: 42.651001ms)"],"step_count":2}
	
	
	==> gcp-auth [9dfbb58800a985a6ac2bc0a3743b7e1c8a75e2fe19948e9397ed4fd00a7eb867] <==
	2025/11/21 13:57:46 GCP Auth Webhook started!
	2025/11/21 13:57:52 Ready to marshal response ...
	2025/11/21 13:57:52 Ready to write response ...
	2025/11/21 13:57:52 Ready to marshal response ...
	2025/11/21 13:57:52 Ready to write response ...
	2025/11/21 13:57:52 Ready to marshal response ...
	2025/11/21 13:57:52 Ready to write response ...
	
	
	==> kernel <==
	 13:58:02 up 40 min,  0 user,  load average: 2.99, 1.35, 0.51
	Linux addons-243127 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081] <==
	I1121 13:56:31.184870       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T13:56:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 13:56:31.457729       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 13:56:31.460242       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 13:56:31.460426       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 13:56:31.461645       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 13:57:01.459059       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 13:57:01.459092       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 13:57:01.459281       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 13:57:01.461779       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1121 13:57:03.060869       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 13:57:03.060893       1 metrics.go:72] Registering metrics
	I1121 13:57:03.060962       1 controller.go:711] "Syncing nftables rules"
	I1121 13:57:11.459304       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:57:11.459358       1 main.go:301] handling current node
	I1121 13:57:21.455117       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:57:21.455180       1 main.go:301] handling current node
	I1121 13:57:31.454741       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:57:31.454772       1 main.go:301] handling current node
	I1121 13:57:41.454657       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:57:41.454695       1 main.go:301] handling current node
	I1121 13:57:51.454739       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:57:51.454775       1 main.go:301] handling current node
	I1121 13:58:01.455083       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 13:58:01.455197       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624] <==
	W1121 13:57:31.548823       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 13:57:31.548883       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1121 13:57:31.548877       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	E1121 13:57:31.550590       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	E1121 13:57:31.555978       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	E1121 13:57:31.577158       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	E1121 13:57:31.618202       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	E1121 13:57:31.699365       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	E1121 13:57:31.860143       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	E1121 13:57:32.181258       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.72.73:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.72.73:443: connect: connection refused" logger="UnhandledError"
	W1121 13:57:32.549117       1 handler_proxy.go:99] no RequestInfo found in the context
	W1121 13:57:32.549158       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 13:57:32.549172       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1121 13:57:32.549190       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1121 13:57:32.549229       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1121 13:57:32.550354       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1121 13:57:32.847176       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1121 13:58:00.862216       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51614: use of closed network connection
	E1121 13:58:00.997048       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51648: use of closed network connection
	
	
	==> kube-controller-manager [b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52] <==
	I1121 13:56:29.741011       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 13:56:29.741053       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 13:56:29.742785       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 13:56:29.743982       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 13:56:29.744027       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 13:56:29.744057       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 13:56:29.744066       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 13:56:29.744073       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 13:56:29.744086       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 13:56:29.748499       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 13:56:29.749950       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-243127" podCIDRs=["10.244.0.0/24"]
	I1121 13:56:29.753035       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 13:56:29.755186       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 13:56:29.755202       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 13:56:29.755209       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1121 13:56:59.748424       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1121 13:56:59.748554       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1121 13:56:59.748619       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1121 13:56:59.760372       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1121 13:56:59.765061       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1121 13:56:59.849012       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 13:56:59.866193       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 13:57:14.745391       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1121 13:57:29.854105       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1121 13:57:29.872778       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc] <==
	I1121 13:56:31.039257       1 server_linux.go:53] "Using iptables proxy"
	I1121 13:56:31.100960       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 13:56:31.202087       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 13:56:31.202121       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 13:56:31.202228       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 13:56:31.219791       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 13:56:31.219836       1 server_linux.go:132] "Using iptables Proxier"
	I1121 13:56:31.225226       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 13:56:31.231141       1 server.go:527] "Version info" version="v1.34.1"
	I1121 13:56:31.231182       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 13:56:31.233076       1 config.go:200] "Starting service config controller"
	I1121 13:56:31.233101       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 13:56:31.237185       1 config.go:106] "Starting endpoint slice config controller"
	I1121 13:56:31.237203       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 13:56:31.237746       1 config.go:309] "Starting node config controller"
	I1121 13:56:31.238910       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 13:56:31.238929       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 13:56:31.238820       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 13:56:31.238955       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 13:56:31.333259       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 13:56:31.338853       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 13:56:31.345472       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b] <==
	E1121 13:56:22.757009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 13:56:22.757022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 13:56:22.757021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 13:56:22.757139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 13:56:22.757186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 13:56:22.757286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 13:56:22.757299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 13:56:22.757546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 13:56:22.757616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 13:56:22.757623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 13:56:22.757728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 13:56:22.757735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 13:56:22.757734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 13:56:22.757856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 13:56:22.757941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 13:56:23.605015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 13:56:23.757950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 13:56:23.791786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 13:56:23.840983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 13:56:23.843945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 13:56:23.844034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 13:56:23.901430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 13:56:23.920378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 13:56:23.956388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1121 13:56:24.255297       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 13:57:38 addons-243127 kubelet[1285]: I1121 13:57:38.374912    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-9k9gw" podStartSLOduration=0.729927975 podStartE2EDuration="26.37489568s" podCreationTimestamp="2025-11-21 13:57:12 +0000 UTC" firstStartedPulling="2025-11-21 13:57:12.519330491 +0000 UTC m=+47.489852726" lastFinishedPulling="2025-11-21 13:57:38.164298196 +0000 UTC m=+73.134820431" observedRunningTime="2025-11-21 13:57:38.374778418 +0000 UTC m=+73.345300671" watchObservedRunningTime="2025-11-21 13:57:38.37489568 +0000 UTC m=+73.345417933"
	Nov 21 13:57:39 addons-243127 kubelet[1285]: I1121 13:57:39.368926    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9k9gw" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 13:57:41 addons-243127 kubelet[1285]: I1121 13:57:41.389209    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-hm6p2" podStartSLOduration=65.635493659 podStartE2EDuration="1m9.389189914s" podCreationTimestamp="2025-11-21 13:56:32 +0000 UTC" firstStartedPulling="2025-11-21 13:57:36.927066896 +0000 UTC m=+71.897589142" lastFinishedPulling="2025-11-21 13:57:40.680763161 +0000 UTC m=+75.651285397" observedRunningTime="2025-11-21 13:57:41.388551917 +0000 UTC m=+76.359074170" watchObservedRunningTime="2025-11-21 13:57:41.389189914 +0000 UTC m=+76.359712190"
	Nov 21 13:57:43 addons-243127 kubelet[1285]: I1121 13:57:43.157256    1285 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 21 13:57:43 addons-243127 kubelet[1285]: I1121 13:57:43.157297    1285 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 21 13:57:43 addons-243127 kubelet[1285]: E1121 13:57:43.843081    1285 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 21 13:57:43 addons-243127 kubelet[1285]: E1121 13:57:43.843167    1285 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf5b16be-3fde-465c-8a46-1b7fccb15f4f-gcr-creds podName:bf5b16be-3fde-465c-8a46-1b7fccb15f4f nodeName:}" failed. No retries permitted until 2025-11-21 13:58:15.843143716 +0000 UTC m=+110.813665949 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/bf5b16be-3fde-465c-8a46-1b7fccb15f4f-gcr-creds") pod "registry-creds-764b6fb674-w4fpg" (UID: "bf5b16be-3fde-465c-8a46-1b7fccb15f4f") : secret "registry-creds-gcr" not found
	Nov 21 13:57:45 addons-243127 kubelet[1285]: I1121 13:57:45.108386    1285 scope.go:117] "RemoveContainer" containerID="4aa7853574d679142fed3b107edcc2bc54dd336a5387193ff4676cac14b19681"
	Nov 21 13:57:46 addons-243127 kubelet[1285]: I1121 13:57:46.404609    1285 scope.go:117] "RemoveContainer" containerID="4aa7853574d679142fed3b107edcc2bc54dd336a5387193ff4676cac14b19681"
	Nov 21 13:57:46 addons-243127 kubelet[1285]: I1121 13:57:46.432168    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-4xdqt" podStartSLOduration=1.496158689 podStartE2EDuration="34.432148675s" podCreationTimestamp="2025-11-21 13:57:12 +0000 UTC" firstStartedPulling="2025-11-21 13:57:12.437354735 +0000 UTC m=+47.407876978" lastFinishedPulling="2025-11-21 13:57:45.373344712 +0000 UTC m=+80.343866964" observedRunningTime="2025-11-21 13:57:46.4316965 +0000 UTC m=+81.402218750" watchObservedRunningTime="2025-11-21 13:57:46.432148675 +0000 UTC m=+81.402670939"
	Nov 21 13:57:47 addons-243127 kubelet[1285]: I1121 13:57:47.462948    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-996c5" podStartSLOduration=66.038998618 podStartE2EDuration="1m8.462928687s" podCreationTimestamp="2025-11-21 13:56:39 +0000 UTC" firstStartedPulling="2025-11-21 13:57:44.162161032 +0000 UTC m=+79.132683285" lastFinishedPulling="2025-11-21 13:57:46.586091118 +0000 UTC m=+81.556613354" observedRunningTime="2025-11-21 13:57:47.450120986 +0000 UTC m=+82.420643239" watchObservedRunningTime="2025-11-21 13:57:47.462928687 +0000 UTC m=+82.433450940"
	Nov 21 13:57:47 addons-243127 kubelet[1285]: I1121 13:57:47.573054    1285 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th67g\" (UniqueName: \"kubernetes.io/projected/ca540f2a-5d7e-48bb-b157-2c6e330cd3a0-kube-api-access-th67g\") pod \"ca540f2a-5d7e-48bb-b157-2c6e330cd3a0\" (UID: \"ca540f2a-5d7e-48bb-b157-2c6e330cd3a0\") "
	Nov 21 13:57:47 addons-243127 kubelet[1285]: I1121 13:57:47.576365    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca540f2a-5d7e-48bb-b157-2c6e330cd3a0-kube-api-access-th67g" (OuterVolumeSpecName: "kube-api-access-th67g") pod "ca540f2a-5d7e-48bb-b157-2c6e330cd3a0" (UID: "ca540f2a-5d7e-48bb-b157-2c6e330cd3a0"). InnerVolumeSpecName "kube-api-access-th67g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 21 13:57:47 addons-243127 kubelet[1285]: I1121 13:57:47.674574    1285 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-th67g\" (UniqueName: \"kubernetes.io/projected/ca540f2a-5d7e-48bb-b157-2c6e330cd3a0-kube-api-access-th67g\") on node \"addons-243127\" DevicePath \"\""
	Nov 21 13:57:48 addons-243127 kubelet[1285]: I1121 13:57:48.106817    1285 scope.go:117] "RemoveContainer" containerID="c7a1157a4375475b74b73005b57ac097adb2196dc8985d07a4a1e1b5a19570ea"
	Nov 21 13:57:48 addons-243127 kubelet[1285]: I1121 13:57:48.424232    1285 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afa676fb164d3a48ea2dc63e7950b1bc4c3d33d967b0e64c6b78e1280a3f3eb8"
	Nov 21 13:57:50 addons-243127 kubelet[1285]: I1121 13:57:50.434511    1285 scope.go:117] "RemoveContainer" containerID="c7a1157a4375475b74b73005b57ac097adb2196dc8985d07a4a1e1b5a19570ea"
	Nov 21 13:57:50 addons-243127 kubelet[1285]: I1121 13:57:50.446051    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-ztr8c" podStartSLOduration=72.448156257 podStartE2EDuration="1m18.446028431s" podCreationTimestamp="2025-11-21 13:56:32 +0000 UTC" firstStartedPulling="2025-11-21 13:57:44.167726929 +0000 UTC m=+79.138249174" lastFinishedPulling="2025-11-21 13:57:50.165599104 +0000 UTC m=+85.136121348" observedRunningTime="2025-11-21 13:57:50.442666313 +0000 UTC m=+85.413188566" watchObservedRunningTime="2025-11-21 13:57:50.446028431 +0000 UTC m=+85.416550687"
	Nov 21 13:57:51 addons-243127 kubelet[1285]: I1121 13:57:51.603245    1285 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bsdr\" (UniqueName: \"kubernetes.io/projected/c6bf0894-fbf4-4cf2-a482-c641ad671fa6-kube-api-access-7bsdr\") pod \"c6bf0894-fbf4-4cf2-a482-c641ad671fa6\" (UID: \"c6bf0894-fbf4-4cf2-a482-c641ad671fa6\") "
	Nov 21 13:57:51 addons-243127 kubelet[1285]: I1121 13:57:51.605796    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6bf0894-fbf4-4cf2-a482-c641ad671fa6-kube-api-access-7bsdr" (OuterVolumeSpecName: "kube-api-access-7bsdr") pod "c6bf0894-fbf4-4cf2-a482-c641ad671fa6" (UID: "c6bf0894-fbf4-4cf2-a482-c641ad671fa6"). InnerVolumeSpecName "kube-api-access-7bsdr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 21 13:57:51 addons-243127 kubelet[1285]: I1121 13:57:51.704110    1285 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7bsdr\" (UniqueName: \"kubernetes.io/projected/c6bf0894-fbf4-4cf2-a482-c641ad671fa6-kube-api-access-7bsdr\") on node \"addons-243127\" DevicePath \"\""
	Nov 21 13:57:52 addons-243127 kubelet[1285]: I1121 13:57:52.446816    1285 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85486c55d7e1607de14dce97826b8cd98a74c6de4db0ec636a3f43fb75164cdf"
	Nov 21 13:57:52 addons-243127 kubelet[1285]: I1121 13:57:52.914341    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvqlz\" (UniqueName: \"kubernetes.io/projected/5ae2154d-5830-41e7-a8ff-ead5aef66f5c-kube-api-access-gvqlz\") pod \"busybox\" (UID: \"5ae2154d-5830-41e7-a8ff-ead5aef66f5c\") " pod="default/busybox"
	Nov 21 13:57:52 addons-243127 kubelet[1285]: I1121 13:57:52.914393    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5ae2154d-5830-41e7-a8ff-ead5aef66f5c-gcp-creds\") pod \"busybox\" (UID: \"5ae2154d-5830-41e7-a8ff-ead5aef66f5c\") " pod="default/busybox"
	Nov 21 13:57:54 addons-243127 kubelet[1285]: I1121 13:57:54.463188    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.887105512 podStartE2EDuration="2.463169372s" podCreationTimestamp="2025-11-21 13:57:52 +0000 UTC" firstStartedPulling="2025-11-21 13:57:53.04768675 +0000 UTC m=+88.018208996" lastFinishedPulling="2025-11-21 13:57:53.623750625 +0000 UTC m=+88.594272856" observedRunningTime="2025-11-21 13:57:54.462765545 +0000 UTC m=+89.433287798" watchObservedRunningTime="2025-11-21 13:57:54.463169372 +0000 UTC m=+89.433691625"
	
	
	==> storage-provisioner [7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a] <==
	W1121 13:57:38.584910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:40.588003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:40.591843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:42.594809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:42.598728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:44.601681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:44.608617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:46.611147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:46.615404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:48.618799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:48.624882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:50.626509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:50.630338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:52.633227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:52.636519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:54.638603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:54.641481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:56.643978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:56.649531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:58.651997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:57:58.655942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:58:00.659148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:58:00.663326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:58:02.666613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 13:58:02.671094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-243127 -n addons-243127
helpers_test.go:269: (dbg) Run:  kubectl --context addons-243127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-create-s29b9 gcp-auth-certs-patch-7bgl5 ingress-nginx-admission-create-l9vg2 ingress-nginx-admission-patch-skmg2 registry-creds-764b6fb674-w4fpg
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-243127 describe pod gcp-auth-certs-create-s29b9 gcp-auth-certs-patch-7bgl5 ingress-nginx-admission-create-l9vg2 ingress-nginx-admission-patch-skmg2 registry-creds-764b6fb674-w4fpg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-243127 describe pod gcp-auth-certs-create-s29b9 gcp-auth-certs-patch-7bgl5 ingress-nginx-admission-create-l9vg2 ingress-nginx-admission-patch-skmg2 registry-creds-764b6fb674-w4fpg: exit status 1 (56.489328ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-s29b9" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-7bgl5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-l9vg2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-skmg2" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-w4fpg" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-243127 describe pod gcp-auth-certs-create-s29b9 gcp-auth-certs-patch-7bgl5 ingress-nginx-admission-create-l9vg2 ingress-nginx-admission-patch-skmg2 registry-creds-764b6fb674-w4fpg: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 addons disable headlamp --alsologtostderr -v=1: exit status 11 (226.132309ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1121 13:58:03.416636   24998 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:58:03.416944   24998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:03.416954   24998 out.go:374] Setting ErrFile to fd 2...
	I1121 13:58:03.416958   24998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:03.417148   24998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:58:03.417375   24998 mustload.go:66] Loading cluster: addons-243127
	I1121 13:58:03.417704   24998 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:03.417718   24998 addons.go:622] checking whether the cluster is paused
	I1121 13:58:03.417799   24998 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:03.417815   24998 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:58:03.418154   24998 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:58:03.435052   24998 ssh_runner.go:195] Run: systemctl --version
	I1121 13:58:03.435109   24998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:58:03.451352   24998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:58:03.542737   24998 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:58:03.542795   24998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:58:03.570242   24998 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 13:58:03.570259   24998 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 13:58:03.570263   24998 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 13:58:03.570266   24998 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 13:58:03.570269   24998 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 13:58:03.570272   24998 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 13:58:03.570274   24998 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 13:58:03.570277   24998 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 13:58:03.570279   24998 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 13:58:03.570283   24998 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 13:58:03.570286   24998 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 13:58:03.570289   24998 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 13:58:03.570292   24998 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 13:58:03.570296   24998 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 13:58:03.570300   24998 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 13:58:03.570314   24998 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 13:58:03.570333   24998 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 13:58:03.570338   24998 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 13:58:03.570342   24998 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 13:58:03.570346   24998 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 13:58:03.570351   24998 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 13:58:03.570355   24998 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 13:58:03.570358   24998 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 13:58:03.570363   24998 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 13:58:03.570367   24998 cri.go:89] found id: ""
	I1121 13:58:03.570408   24998 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:58:03.583046   24998 out.go:203] 
	W1121 13:58:03.584072   24998 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:58:03.584086   24998 out.go:285] * 
	* 
	W1121 13:58:03.586961   24998 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:58:03.587993   24998 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-243127 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.36s)
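Every `addons disable` failure in this report exits the same way: before touching the addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node, and on this crio node /run/runc does not exist, so the check itself fails and is misreported as MK_ADDON_DISABLE_PAUSED even though the cluster is running. The two probes can be replayed by hand to separate the healthy CRI from the broken runc query (a sketch; both commands are taken from the log above, behavior assumed unchanged):

	minikube -p addons-243127 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds: crio answers via crictl
	minikube -p addons-243127 ssh -- sudo runc list -f json   # fails: open /run/runc: no such file or directory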

                                                
                                    
TestAddons/parallel/CloudSpanner (5.25s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-mpbzj" [48d854fd-8e15-4d9a-a965-2c26d3a73f95] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00194237s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (245.11453ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1121 13:58:06.293502   25117 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:58:06.293861   25117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:06.293872   25117 out.go:374] Setting ErrFile to fd 2...
	I1121 13:58:06.293877   25117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:06.294082   25117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:58:06.294318   25117 mustload.go:66] Loading cluster: addons-243127
	I1121 13:58:06.294662   25117 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:06.294679   25117 addons.go:622] checking whether the cluster is paused
	I1121 13:58:06.294757   25117 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:06.294767   25117 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:58:06.295120   25117 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:58:06.313259   25117 ssh_runner.go:195] Run: systemctl --version
	I1121 13:58:06.313322   25117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:58:06.331369   25117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:58:06.426693   25117 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:58:06.426768   25117 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:58:06.456089   25117 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 13:58:06.456112   25117 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 13:58:06.456116   25117 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 13:58:06.456119   25117 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 13:58:06.456122   25117 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 13:58:06.456127   25117 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 13:58:06.456131   25117 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 13:58:06.456135   25117 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 13:58:06.456139   25117 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 13:58:06.456150   25117 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 13:58:06.456155   25117 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 13:58:06.456159   25117 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 13:58:06.456163   25117 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 13:58:06.456167   25117 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 13:58:06.456171   25117 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 13:58:06.456183   25117 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 13:58:06.456191   25117 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 13:58:06.456195   25117 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 13:58:06.456198   25117 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 13:58:06.456200   25117 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 13:58:06.456203   25117 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 13:58:06.456205   25117 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 13:58:06.456207   25117 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 13:58:06.456210   25117 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 13:58:06.456212   25117 cri.go:89] found id: ""
	I1121 13:58:06.456257   25117 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:58:06.469529   25117 out.go:203] 
	W1121 13:58:06.470915   25117 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:58:06.470938   25117 out.go:285] * 
	* 
	W1121 13:58:06.476193   25117 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:58:06.477298   25117 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-243127 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)

TestAddons/parallel/LocalPath (10.05s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-243127 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-243127 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-243127 get pvc test-pvc -o jsonpath={.status.phase} -n default
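The six identical jsonpath polls above are the test helper waiting for test-pvc to leave Pending. Since kubectl v1.23 the same wait collapses into a single blocking command (a sketch of an equivalent, not what the helper actually runs):

	kubectl --context addons-243127 wait pvc/test-pvc --for=jsonpath='{.status.phase}'=Bound --timeout=5m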
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [207aa9f5-11c6-41aa-956d-e25cae218b7d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [207aa9f5-11c6-41aa-956d-e25cae218b7d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [207aa9f5-11c6-41aa-956d-e25cae218b7d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002747411s
addons_test.go:967: (dbg) Run:  kubectl --context addons-243127 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 ssh "cat /opt/local-path-provisioner/pvc-c29fcf8b-bba0-4719-9424-7448a031a85f_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-243127 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-243127 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (229.960862ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1121 13:58:23.923712   27666 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:58:23.924040   27666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:23.924052   27666 out.go:374] Setting ErrFile to fd 2...
	I1121 13:58:23.924059   27666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:23.924281   27666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:58:23.924621   27666 mustload.go:66] Loading cluster: addons-243127
	I1121 13:58:23.925080   27666 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:23.925104   27666 addons.go:622] checking whether the cluster is paused
	I1121 13:58:23.925191   27666 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:23.925203   27666 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:58:23.925616   27666 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:58:23.942366   27666 ssh_runner.go:195] Run: systemctl --version
	I1121 13:58:23.942414   27666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:58:23.959059   27666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:58:24.050740   27666 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:58:24.050809   27666 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:58:24.077136   27666 cri.go:89] found id: "0b9bbd56ef6c76ed87506b65b92848c527e40f5c539ec9f68e55961be7ec43c9"
	I1121 13:58:24.077158   27666 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 13:58:24.077162   27666 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 13:58:24.077165   27666 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 13:58:24.077168   27666 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 13:58:24.077175   27666 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 13:58:24.077178   27666 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 13:58:24.077180   27666 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 13:58:24.077183   27666 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 13:58:24.077192   27666 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 13:58:24.077195   27666 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 13:58:24.077198   27666 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 13:58:24.077200   27666 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 13:58:24.077203   27666 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 13:58:24.077205   27666 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 13:58:24.077216   27666 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 13:58:24.077222   27666 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 13:58:24.077226   27666 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 13:58:24.077229   27666 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 13:58:24.077231   27666 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 13:58:24.077234   27666 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 13:58:24.077236   27666 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 13:58:24.077238   27666 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 13:58:24.077241   27666 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 13:58:24.077243   27666 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 13:58:24.077245   27666 cri.go:89] found id: ""
	I1121 13:58:24.077291   27666 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:58:24.090761   27666 out.go:203] 
	W1121 13:58:24.091956   27666 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:58:24.091975   27666 out.go:285] * 
	* 
	W1121 13:58:24.094942   27666 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:58:24.096589   27666 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-243127 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.05s)

TestAddons/parallel/NvidiaDevicePlugin (5.25s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-v2h2s" [915d9baa-5e34-4320-9fe6-d65726ad8bb0] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003398267s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (246.208536ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1121 13:58:06.293504   25116 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:58:06.293682   25116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:06.293694   25116 out.go:374] Setting ErrFile to fd 2...
	I1121 13:58:06.293697   25116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:06.293866   25116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:58:06.294102   25116 mustload.go:66] Loading cluster: addons-243127
	I1121 13:58:06.294399   25116 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:06.294419   25116 addons.go:622] checking whether the cluster is paused
	I1121 13:58:06.294502   25116 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:06.294512   25116 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:58:06.294937   25116 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:58:06.312847   25116 ssh_runner.go:195] Run: systemctl --version
	I1121 13:58:06.312897   25116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:58:06.331014   25116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:58:06.426478   25116 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:58:06.426551   25116 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:58:06.455017   25116 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 13:58:06.455052   25116 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 13:58:06.455057   25116 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 13:58:06.455063   25116 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 13:58:06.455067   25116 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 13:58:06.455072   25116 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 13:58:06.455076   25116 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 13:58:06.455081   25116 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 13:58:06.455086   25116 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 13:58:06.455095   25116 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 13:58:06.455099   25116 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 13:58:06.455105   25116 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 13:58:06.455109   25116 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 13:58:06.455114   25116 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 13:58:06.455119   25116 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 13:58:06.455136   25116 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 13:58:06.455144   25116 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 13:58:06.455149   25116 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 13:58:06.455152   25116 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 13:58:06.455156   25116 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 13:58:06.455164   25116 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 13:58:06.455171   25116 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 13:58:06.455175   25116 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 13:58:06.455179   25116 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 13:58:06.455183   25116 cri.go:89] found id: ""
	I1121 13:58:06.455227   25116 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:58:06.468770   25116 out.go:203] 
	W1121 13:58:06.470285   25116 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:58:06.470303   25116 out.go:285] * 
	* 
	W1121 13:58:06.475723   25116 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:58:06.476730   25116 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-243127 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)

TestAddons/parallel/Yakd (5.25s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-mwqnd" [c67c445a-54f8-4dd5-94be-20d59d5d46a8] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.002984905s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 addons disable yakd --alsologtostderr -v=1: exit status 11 (245.447083ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1121 13:58:23.528384   27500 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:58:23.528758   27500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:23.528773   27500 out.go:374] Setting ErrFile to fd 2...
	I1121 13:58:23.528781   27500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:23.528996   27500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:58:23.529321   27500 mustload.go:66] Loading cluster: addons-243127
	I1121 13:58:23.529676   27500 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:23.529695   27500 addons.go:622] checking whether the cluster is paused
	I1121 13:58:23.529796   27500 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:23.529809   27500 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:58:23.530145   27500 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:58:23.548370   27500 ssh_runner.go:195] Run: systemctl --version
	I1121 13:58:23.548419   27500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:58:23.564488   27500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:58:23.656614   27500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:58:23.656682   27500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:58:23.686967   27500 cri.go:89] found id: "0b9bbd56ef6c76ed87506b65b92848c527e40f5c539ec9f68e55961be7ec43c9"
	I1121 13:58:23.686988   27500 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 13:58:23.686993   27500 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 13:58:23.686997   27500 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 13:58:23.687001   27500 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 13:58:23.687012   27500 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 13:58:23.687016   27500 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 13:58:23.687020   27500 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 13:58:23.687024   27500 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 13:58:23.687030   27500 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 13:58:23.687047   27500 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 13:58:23.687050   27500 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 13:58:23.687054   27500 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 13:58:23.687057   27500 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 13:58:23.687061   27500 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 13:58:23.687067   27500 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 13:58:23.687071   27500 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 13:58:23.687075   27500 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 13:58:23.687079   27500 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 13:58:23.687082   27500 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 13:58:23.687086   27500 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 13:58:23.687089   27500 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 13:58:23.687092   27500 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 13:58:23.687096   27500 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 13:58:23.687099   27500 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 13:58:23.687103   27500 cri.go:89] found id: ""
	I1121 13:58:23.687144   27500 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:58:23.702664   27500 out.go:203] 
	W1121 13:58:23.704096   27500 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:58:23.704117   27500 out.go:285] * 
	* 
	W1121 13:58:23.708596   27500 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:58:23.709653   27500 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-243127 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.25s)

TestAddons/parallel/AmdGpuDevicePlugin (6.24s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-rs4wk" [d044fde9-5989-433c-bea4-d92a04c49500] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.002706626s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-243127 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-243127 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (239.056841ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1121 13:58:18.274928   27108 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:58:18.275225   27108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:18.275236   27108 out.go:374] Setting ErrFile to fd 2...
	I1121 13:58:18.275240   27108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:58:18.275410   27108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:58:18.275648   27108 mustload.go:66] Loading cluster: addons-243127
	I1121 13:58:18.276000   27108 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:18.276017   27108 addons.go:622] checking whether the cluster is paused
	I1121 13:58:18.276101   27108 config.go:182] Loaded profile config "addons-243127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 13:58:18.276112   27108 host.go:66] Checking if "addons-243127" exists ...
	I1121 13:58:18.276442   27108 cli_runner.go:164] Run: docker container inspect addons-243127 --format={{.State.Status}}
	I1121 13:58:18.296291   27108 ssh_runner.go:195] Run: systemctl --version
	I1121 13:58:18.296334   27108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-243127
	I1121 13:58:18.314799   27108 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/addons-243127/id_rsa Username:docker}
	I1121 13:58:18.413269   27108 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 13:58:18.413351   27108 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 13:58:18.441659   27108 cri.go:89] found id: "0b9bbd56ef6c76ed87506b65b92848c527e40f5c539ec9f68e55961be7ec43c9"
	I1121 13:58:18.441675   27108 cri.go:89] found id: "024fd155c71f400a95e31fc7ad96222849e4a688d5188b8418494fff998b02f8"
	I1121 13:58:18.441680   27108 cri.go:89] found id: "a45184ba10995c917c66320d64b28434d66ae84a7641d0c7c7c9435196c72b05"
	I1121 13:58:18.441683   27108 cri.go:89] found id: "c7cea569e79d903c00cfe8fa08fc613df7758703b3a3365f91c8a868e223391a"
	I1121 13:58:18.441685   27108 cri.go:89] found id: "f87ee9ca1eb0d9c8526d317e4709114848a958759fe6996309f72e558fcc76bd"
	I1121 13:58:18.441690   27108 cri.go:89] found id: "b69dec080641a7059fc510bee4a22d19e85c4143780d1fee7044f2fdb740f740"
	I1121 13:58:18.441692   27108 cri.go:89] found id: "733ab2d4f270d078eb3b7fb75ffde7d300a333a9eb60360fe6d7d27fb0875dd7"
	I1121 13:58:18.441694   27108 cri.go:89] found id: "6d89596515e60da3c6350bc1ead48b9dbbf7532b2b24203a2cac8a9359f130a5"
	I1121 13:58:18.441697   27108 cri.go:89] found id: "12a7ad65b01f342d9f1789252a52da86937f6e873020a7041cff37d3b28aaf6f"
	I1121 13:58:18.441710   27108 cri.go:89] found id: "3179500ac77197cbb987c594d2e651bf8097474fdf25f2ce1512534d31d41788"
	I1121 13:58:18.441715   27108 cri.go:89] found id: "994774c9ca4f6f8e3514b947f0b8fda8fa47d7542203f328719915376b9619b8"
	I1121 13:58:18.441718   27108 cri.go:89] found id: "32c8ebd6346418c59c61269d61198ed0c8fdb4a99d46cc2ba298869b17e82675"
	I1121 13:58:18.441720   27108 cri.go:89] found id: "56057aee31072304afba5ad58c29a181e243ff2e0f856de3cc6a72a06aa40534"
	I1121 13:58:18.441722   27108 cri.go:89] found id: "77eb40d30250c88e9becb416f1d606fd898acc0a86c49c8005d72a9268c0d3f1"
	I1121 13:58:18.441725   27108 cri.go:89] found id: "7d1e97c795b004c26d0da895539dc886fe57268b3dac72ee7d7de356e86f6014"
	I1121 13:58:18.441729   27108 cri.go:89] found id: "b3df341d90d52b5ef2ee3a00f8e67c97d074f486b504b70f4bd9ca36e586af13"
	I1121 13:58:18.441734   27108 cri.go:89] found id: "a862b6c84241dc48d722f3ee0bd89241e61135843ff33148c7f534cbf5f5680c"
	I1121 13:58:18.441738   27108 cri.go:89] found id: "7d8b7a3c495d7923335171c6b39b7aaad4571152b404188c58e3055e39def27a"
	I1121 13:58:18.441744   27108 cri.go:89] found id: "8a5df4965546d743393a4b617fc40ffb5e887a5847cf860fc0a2bf8ca9d53262"
	I1121 13:58:18.441747   27108 cri.go:89] found id: "7fb6fcbbcafef908292ed5964ecd5904153b8c7e75af509515148e2266c74ecc"
	I1121 13:58:18.441749   27108 cri.go:89] found id: "66f416a2611818263d3d82ca7e1c1dc51f8f8373c1ef7e282ba527f4ea204081"
	I1121 13:58:18.441752   27108 cri.go:89] found id: "b49596a0b2d4dc21f077381ec50d10f903e56d22d3f6c4701ecced574385ab52"
	I1121 13:58:18.441755   27108 cri.go:89] found id: "6bc0a23d21b59305e5afddf76ffb436c468bd915a8bb02d9d71592825dafd624"
	I1121 13:58:18.441760   27108 cri.go:89] found id: "61ca322c069424488425150fa2cb0785f8dae51cb585322a28bd81a85d0796f1"
	I1121 13:58:18.441763   27108 cri.go:89] found id: "19610d1d8120b612dcda4fbab26734a764faf1bebe73cc951524320b474dea8b"
	I1121 13:58:18.441766   27108 cri.go:89] found id: ""
	I1121 13:58:18.441804   27108 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 13:58:18.454413   27108 out.go:203] 
	W1121 13:58:18.455620   27108 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T13:58:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 13:58:18.455639   27108 out.go:285] * 
	* 
	W1121 13:58:18.458550   27108 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 13:58:18.459722   27108 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-243127 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.24s)
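Note: every addon-disable failure in this run shares the signature above. minikube's "check paused" pre-flight shells into the node and runs `sudo runc list -f json`, and runc exits 1 because its default state directory, /run/runc, does not exist on this CRI-O node. A minimal diagnostic sketch — these commands are illustrative, not part of the test run, and the crio config check is only an assumption about where the runtime root is declared:

	# reproduce the failing check exactly as minikube runs it
	minikube -p addons-243127 ssh -- sudo runc list -f json
	# confirm the state directory is absent, then see which runtime_root CRI-O uses
	minikube -p addons-243127 ssh -- ls /run/runc
	minikube -p addons-243127 ssh -- sudo crio config | grep -n runtime_root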

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (602.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-179014 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-179014 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-6xn2m" [650cb18b-160a-4099-8353-48a47ac5d9f3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-179014 -n functional-179014
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-21 14:14:25.02518738 +0000 UTC m=+1127.039887349
functional_test.go:1645: (dbg) Run:  kubectl --context functional-179014 describe po hello-node-connect-7d85dfc575-6xn2m -n default
functional_test.go:1645: (dbg) kubectl --context functional-179014 describe po hello-node-connect-7d85dfc575-6xn2m -n default:
Name:             hello-node-connect-7d85dfc575-6xn2m
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-179014/192.168.49.2
Start Time:       Fri, 21 Nov 2025 14:04:24 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r6w59 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-r6w59:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6xn2m to functional-179014
  Normal   Pulling    7m1s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m1s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m1s (x5 over 9m59s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m49s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m49s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
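The Pulling/Failed events pin down the root cause: CRI-O on this node enforces short-name resolution (short-name-mode in containers-registries.conf), so the unqualified reference kicbase/echo-server evidently matches more than one unqualified-search registry and the pull is rejected rather than resolved by guesswork. A workaround sketch — illustrative commands, not part of the test run, assuming the intended image is Docker Hub's kicbase/echo-server:

	# reproduce the rejection, then the fully-qualified pull, directly against CRI-O
	minikube -p functional-179014 ssh -- sudo crictl pull kicbase/echo-server
	minikube -p functional-179014 ssh -- sudo crictl pull docker.io/kicbase/echo-server
	# point the deployment at the fully-qualified name (the container is named echo-server)
	kubectl --context functional-179014 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server

Fully qualifying the registry sidesteps the ambiguity check without loosening short-name-mode on the node.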
functional_test.go:1645: (dbg) Run:  kubectl --context functional-179014 logs hello-node-connect-7d85dfc575-6xn2m -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-179014 logs hello-node-connect-7d85dfc575-6xn2m -n default: exit status 1 (64.467185ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6xn2m" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-179014 logs hello-node-connect-7d85dfc575-6xn2m -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-179014 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-6xn2m
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-179014/192.168.49.2
Start Time:       Fri, 21 Nov 2025 14:04:24 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r6w59 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-r6w59:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6xn2m to functional-179014
  Normal   Pulling    7m1s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m1s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m1s (x5 over 9m59s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m49s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m49s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-179014 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-179014 logs -l app=hello-node-connect: exit status 1 (56.541564ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6xn2m" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-179014 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-179014 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.236.139
IPs:                      10.105.236.139
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30856/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
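Note the empty Endpoints field above: with the pod stuck in ImagePullBackOff it never becomes Ready, so nothing backs NodePort 30856 and the connect half of the test would fail even once the CLI steps succeeded. A quick confirmation sketch (illustrative commands, not part of the test run):

	kubectl --context functional-179014 get endpoints hello-node-connect
	kubectl --context functional-179014 get pods -l app=hello-node-connect -o wide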
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-179014
helpers_test.go:243: (dbg) docker inspect functional-179014:

-- stdout --
	[
	    {
	        "Id": "84b3de09c5d95277631e303db57e3d6b2e4a64387c374f8ea6c176e9fe17c54d",
	        "Created": "2025-11-21T14:01:54.015121472Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 38176,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:01:54.046989367Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/84b3de09c5d95277631e303db57e3d6b2e4a64387c374f8ea6c176e9fe17c54d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/84b3de09c5d95277631e303db57e3d6b2e4a64387c374f8ea6c176e9fe17c54d/hostname",
	        "HostsPath": "/var/lib/docker/containers/84b3de09c5d95277631e303db57e3d6b2e4a64387c374f8ea6c176e9fe17c54d/hosts",
	        "LogPath": "/var/lib/docker/containers/84b3de09c5d95277631e303db57e3d6b2e4a64387c374f8ea6c176e9fe17c54d/84b3de09c5d95277631e303db57e3d6b2e4a64387c374f8ea6c176e9fe17c54d-json.log",
	        "Name": "/functional-179014",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-179014:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-179014",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "84b3de09c5d95277631e303db57e3d6b2e4a64387c374f8ea6c176e9fe17c54d",
	                "LowerDir": "/var/lib/docker/overlay2/8df8a1b8f4ce1135c8f447913e93aa838bb7d2abfb442bdcd8acf730ba5d6e6e-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8df8a1b8f4ce1135c8f447913e93aa838bb7d2abfb442bdcd8acf730ba5d6e6e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8df8a1b8f4ce1135c8f447913e93aa838bb7d2abfb442bdcd8acf730ba5d6e6e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8df8a1b8f4ce1135c8f447913e93aa838bb7d2abfb442bdcd8acf730ba5d6e6e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-179014",
	                "Source": "/var/lib/docker/volumes/functional-179014/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-179014",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-179014",
	                "name.minikube.sigs.k8s.io": "functional-179014",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "35e07bcdf8bb8f310970de25429b0783bfa143414edc628e05e14e1bb461f452",
	            "SandboxKey": "/var/run/docker/netns/35e07bcdf8bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-179014": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d0e41c7a05ae7a349c6313e5e042b60a145f66460a5ac943439988e72e4ca60e",
	                    "EndpointID": "c386df85419e8b86bd8122839fe7941cec70f42bf5c28764f2a305a23590cfbf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "f6:18:2f:25:ea:1f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-179014",
	                        "84b3de09c5d9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-179014 -n functional-179014
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-179014 logs -n 25: (1.154105557s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-179014 ssh sudo cat /etc/ssl/certs/14542.pem                                                                    │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:04 UTC │
	│ ssh            │ functional-179014 ssh sudo cat /usr/share/ca-certificates/14542.pem                                                        │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:04 UTC │
	│ ssh            │ functional-179014 ssh sudo cat /etc/ssl/certs/51391683.0                                                                   │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:04 UTC │
	│ ssh            │ functional-179014 ssh sudo cat /etc/ssl/certs/145422.pem                                                                   │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:04 UTC │
	│ ssh            │ functional-179014 ssh sudo cat /usr/share/ca-certificates/145422.pem                                                       │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:04 UTC │
	│ ssh            │ functional-179014 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                   │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:04 UTC │
	│ start          │ -p functional-179014 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                  │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │                     │
	│ start          │ -p functional-179014 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                            │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-179014 --alsologtostderr -v=1                                                             │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:04 UTC │
	│ cp             │ functional-179014 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:04 UTC │
	│ ssh            │ functional-179014 ssh -n functional-179014 sudo cat /home/docker/cp-test.txt                                               │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:04 UTC │
	│ cp             │ functional-179014 cp functional-179014:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2841785318/001/cp-test.txt │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:04 UTC │
	│ ssh            │ functional-179014 ssh -n functional-179014 sudo cat /home/docker/cp-test.txt                                               │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:04 UTC │
	│ cp             │ functional-179014 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:04 UTC │
	│ ssh            │ functional-179014 ssh -n functional-179014 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:04 UTC │
	│ image          │ functional-179014 image ls --format short --alsologtostderr                                                                │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:04 UTC │
	│ image          │ functional-179014 image ls --format yaml --alsologtostderr                                                                 │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:04 UTC │
	│ ssh            │ functional-179014 ssh pgrep buildkitd                                                                                      │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │                     │
	│ image          │ functional-179014 image build -t localhost/my-image:functional-179014 testdata/build --alsologtostderr                     │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:04 UTC │ 21 Nov 25 14:05 UTC │
	│ image          │ functional-179014 image ls                                                                                                 │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:05 UTC │ 21 Nov 25 14:05 UTC │
	│ image          │ functional-179014 image ls --format json --alsologtostderr                                                                 │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:05 UTC │ 21 Nov 25 14:05 UTC │
	│ image          │ functional-179014 image ls --format table --alsologtostderr                                                                │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:05 UTC │ 21 Nov 25 14:05 UTC │
	│ update-context │ functional-179014 update-context --alsologtostderr -v=2                                                                    │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:05 UTC │ 21 Nov 25 14:05 UTC │
	│ update-context │ functional-179014 update-context --alsologtostderr -v=2                                                                    │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:05 UTC │ 21 Nov 25 14:05 UTC │
	│ update-context │ functional-179014 update-context --alsologtostderr -v=2                                                                    │ functional-179014 │ jenkins │ v1.37.0 │ 21 Nov 25 14:05 UTC │ 21 Nov 25 14:05 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:04:44
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:04:44.581943   52625 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:04:44.582167   52625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:04:44.582175   52625 out.go:374] Setting ErrFile to fd 2...
	I1121 14:04:44.582179   52625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:04:44.582370   52625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:04:44.582771   52625 out.go:368] Setting JSON to false
	I1121 14:04:44.583671   52625 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2834,"bootTime":1763731051,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:04:44.583746   52625 start.go:143] virtualization: kvm guest
	I1121 14:04:44.585056   52625 out.go:179] * [functional-179014] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:04:44.586069   52625 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:04:44.586075   52625 notify.go:221] Checking for updates...
	I1121 14:04:44.587008   52625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:04:44.588012   52625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:04:44.588965   52625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:04:44.589937   52625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:04:44.590876   52625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:04:44.592471   52625 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:04:44.593155   52625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:04:44.616995   52625 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:04:44.617074   52625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:04:44.673747   52625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-21 14:04:44.664177263 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:04:44.673842   52625 docker.go:319] overlay module found
	I1121 14:04:44.675339   52625 out.go:179] * Using the docker driver based on existing profile
	I1121 14:04:44.676303   52625 start.go:309] selected driver: docker
	I1121 14:04:44.676315   52625 start.go:930] validating driver "docker" against &{Name:functional-179014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-179014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:04:44.676408   52625 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:04:44.676489   52625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:04:44.730677   52625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-21 14:04:44.721318199 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:04:44.731495   52625 cni.go:84] Creating CNI manager for ""
	I1121 14:04:44.731555   52625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:04:44.731651   52625 start.go:353] cluster config:
	{Name:functional-179014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-179014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:04:44.733091   52625 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 21 14:04:50 functional-179014 crio[3600]: time="2025-11-21T14:04:50.948440198Z" level=info msg="Starting container: 8e4b221809e21f7268899c3d7fcc559ddeed2d3ef2dd7b96c7052e11f2ded4b9" id=09cf4003-cdc8-4888-b37e-0e3b669719b2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:04:50 functional-179014 crio[3600]: time="2025-11-21T14:04:50.949984048Z" level=info msg="Started container" PID=7051 containerID=8e4b221809e21f7268899c3d7fcc559ddeed2d3ef2dd7b96c7052e11f2ded4b9 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-g76lm/kubernetes-dashboard id=09cf4003-cdc8-4888-b37e-0e3b669719b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e101e01a5d7de13f5d2fa54594c3d0678e25ea4e0877b66e877a18671649115a
	Nov 21 14:04:58 functional-179014 crio[3600]: time="2025-11-21T14:04:58.568199241Z" level=info msg="Pulled image: docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da" id=e1d2c4e4-dec1-4cc3-a971-957731fcb4a7 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:04:58 functional-179014 crio[3600]: time="2025-11-21T14:04:58.56879651Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=d267ea6a-c2b8-4eb3-bc2c-c7f84b5f74e0 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:04:58 functional-179014 crio[3600]: time="2025-11-21T14:04:58.570779651Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=38e123b4-55a4-4c47-931a-77582ebabafa name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:04:58 functional-179014 crio[3600]: time="2025-11-21T14:04:58.574652054Z" level=info msg="Creating container: default/mysql-5bb876957f-8ftp8/mysql" id=53d88692-3d2e-4126-82c3-6f3aa8c89fbb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:04:58 functional-179014 crio[3600]: time="2025-11-21T14:04:58.575019817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:04:58 functional-179014 crio[3600]: time="2025-11-21T14:04:58.584358687Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:04:58 functional-179014 crio[3600]: time="2025-11-21T14:04:58.585221507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:04:58 functional-179014 crio[3600]: time="2025-11-21T14:04:58.617626754Z" level=info msg="Created container e956870f8ef338f1ebbff14b3db4523d72b15dcf72e4ace4692d7b7f973367fb: default/mysql-5bb876957f-8ftp8/mysql" id=53d88692-3d2e-4126-82c3-6f3aa8c89fbb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:04:58 functional-179014 crio[3600]: time="2025-11-21T14:04:58.618184732Z" level=info msg="Starting container: e956870f8ef338f1ebbff14b3db4523d72b15dcf72e4ace4692d7b7f973367fb" id=4879ceaa-f05d-4b60-9274-b178ae37be63 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:04:58 functional-179014 crio[3600]: time="2025-11-21T14:04:58.619889269Z" level=info msg="Started container" PID=7494 containerID=e956870f8ef338f1ebbff14b3db4523d72b15dcf72e4ace4692d7b7f973367fb description=default/mysql-5bb876957f-8ftp8/mysql id=4879ceaa-f05d-4b60-9274-b178ae37be63 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2b8709aa1cea3e184cc0581c92fb9969be284121db48f1dd3f11ae0c4f85243f
	Nov 21 14:05:06 functional-179014 crio[3600]: time="2025-11-21T14:05:06.501865326Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2a7c027a-ef9e-44db-9997-4ebe9e71bb77 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:05:07 functional-179014 crio[3600]: time="2025-11-21T14:05:07.501698053Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ef1044af-f81c-4ed7-9245-a43beeabe7f9 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:05:30 functional-179014 crio[3600]: time="2025-11-21T14:05:30.499627661Z" level=info msg="Stopping pod sandbox: a2c193360e8156e153c3bfe91ad3e06632b54c5e907adf7ff6e152f3dcc87788" id=48b46878-bf13-4bc9-8a7c-0db0d7754f96 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 14:05:30 functional-179014 crio[3600]: time="2025-11-21T14:05:30.499692297Z" level=info msg="Stopped pod sandbox (already stopped): a2c193360e8156e153c3bfe91ad3e06632b54c5e907adf7ff6e152f3dcc87788" id=48b46878-bf13-4bc9-8a7c-0db0d7754f96 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 14:05:30 functional-179014 crio[3600]: time="2025-11-21T14:05:30.500046579Z" level=info msg="Removing pod sandbox: a2c193360e8156e153c3bfe91ad3e06632b54c5e907adf7ff6e152f3dcc87788" id=21c29ed0-a006-4a1e-b192-18c6d272f8d2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 14:05:30 functional-179014 crio[3600]: time="2025-11-21T14:05:30.503101894Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 14:05:30 functional-179014 crio[3600]: time="2025-11-21T14:05:30.503156529Z" level=info msg="Removed pod sandbox: a2c193360e8156e153c3bfe91ad3e06632b54c5e907adf7ff6e152f3dcc87788" id=21c29ed0-a006-4a1e-b192-18c6d272f8d2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 14:05:57 functional-179014 crio[3600]: time="2025-11-21T14:05:57.501985719Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b3ca55c9-d762-4b69-9e58-cc9f5e931ff4 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:05:59 functional-179014 crio[3600]: time="2025-11-21T14:05:59.501225353Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=46b87631-89e3-4ee2-8c42-cc5690dc3740 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:07:24 functional-179014 crio[3600]: time="2025-11-21T14:07:24.501666539Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4ae5f940-305f-453d-ae86-6b0c5e1437a4 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:07:28 functional-179014 crio[3600]: time="2025-11-21T14:07:28.501827206Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e42261b8-3dbf-485e-8b17-44e5de3973c7 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:10:14 functional-179014 crio[3600]: time="2025-11-21T14:10:14.501044119Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=dc21138e-3e59-4572-874d-64099bee836b name=/runtime.v1.ImageService/PullImage
	Nov 21 14:10:19 functional-179014 crio[3600]: time="2025-11-21T14:10:19.501864633Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b429f2b8-e065-42d0-b490-304995e007ff name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e956870f8ef33       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   2b8709aa1cea3       mysql-5bb876957f-8ftp8                       default
	8e4b221809e21       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   e101e01a5d7de       kubernetes-dashboard-855c9754f9-g76lm        kubernetes-dashboard
	7603b180ec56d       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   8cd6519e0ec61       dashboard-metrics-scraper-77bf4d6c4c-svrzs   kubernetes-dashboard
	5ab5405849a62       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   fec9af208806e       sp-pod                                       default
	d5f0b2aa2ca2e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   8c04f61484073       busybox-mount                                default
	3674ed01d14ac       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   17969bd37b4af       nginx-svc                                    default
	fc77d2faf90eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   5039f6b198f4c       storage-provisioner                          kube-system
	59debf0742b1b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   a805cdcca7a3c       kube-apiserver-functional-179014             kube-system
	8e7112a74d73d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     1                   94d4cf480de97       kube-controller-manager-functional-179014    kube-system
	df6390294bab9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   d14927cdb2a1b       kube-scheduler-functional-179014             kube-system
	dc30781d6411d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   3f58e7bc6c644       etcd-functional-179014                       kube-system
	943edd36f98a5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   acbe55661a773       kindnet-2mx82                                kube-system
	d91cf253b4f5b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   758244a519b5c       coredns-66bc5c9577-86pj8                     kube-system
	5fd8f2d24cf79       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   5039f6b198f4c       storage-provisioner                          kube-system
	5c771bd775018       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   848ca7b2c6584       kube-proxy-5bt4q                             kube-system
	c1f7010e7763d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   758244a519b5c       coredns-66bc5c9577-86pj8                     kube-system
	6ab03492117dc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Exited              kindnet-cni                 0                   acbe55661a773       kindnet-2mx82                                kube-system
	0a2ea401cb211       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 12 minutes ago      Exited              kube-proxy                  0                   848ca7b2c6584       kube-proxy-5bt4q                             kube-system
	1d2c5dc715b1a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 12 minutes ago      Exited              kube-controller-manager     0                   94d4cf480de97       kube-controller-manager-functional-179014    kube-system
	3ae744f3cc2b5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 12 minutes ago      Exited              kube-scheduler              0                   d14927cdb2a1b       kube-scheduler-functional-179014             kube-system
	3b8bce5e0ed9a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   3f58e7bc6c644       etcd-functional-179014                       kube-system
	
	
	==> coredns [c1f7010e7763daf0013ef0f04a2d09ea1dd3908f05fb42e937f1ac707704e1c1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46007 - 12342 "HINFO IN 2949170969962402983.3315952032360394729. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066630445s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d91cf253b4f5b460c6eca64b8924f8aa1d74e01cb96bdb4f35a9cd85bb432e71] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40851 - 30312 "HINFO IN 6202291274040397738.2641656625492293525. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.483649605s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               functional-179014
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-179014
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=functional-179014
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_02_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:02:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-179014
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:14:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:13:14 +0000   Fri, 21 Nov 2025 14:02:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:13:14 +0000   Fri, 21 Nov 2025 14:02:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:13:14 +0000   Fri, 21 Nov 2025 14:02:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:13:14 +0000   Fri, 21 Nov 2025 14:02:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-179014
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                fa14508d-74dd-44de-94e6-0cd4074cb93d
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hkgzj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	  default                     hello-node-connect-7d85dfc575-6xn2m           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-8ftp8                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m38s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 coredns-66bc5c9577-86pj8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-179014                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-2mx82                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-179014              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-179014     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5bt4q                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-179014              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-svrzs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-g76lm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-179014 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-179014 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-179014 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-179014 event: Registered Node functional-179014 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-179014 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-179014 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-179014 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-179014 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-179014 event: Registered Node functional-179014 in Controller
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [3b8bce5e0ed9ae94de175479d62c3d3937bfda0f38507e20625e59a0d1cda239] <==
	{"level":"warn","ts":"2025-11-21T14:02:04.871586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:02:04.878748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:02:04.884622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:02:04.895401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:02:04.901081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:02:04.907935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:02:04.948988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41078","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T14:03:29.196293Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-21T14:03:29.196371Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-179014","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-21T14:03:29.196465Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-21T14:03:29.197950Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-21T14:03:29.199278Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-21T14:03:29.199311Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-21T14:03:29.199362Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-21T14:03:29.199373Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-21T14:03:29.199361Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-21T14:03:29.199397Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-21T14:03:29.199413Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-21T14:03:29.199361Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-21T14:03:29.199450Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-21T14:03:29.199463Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-21T14:03:29.201505Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-21T14:03:29.201583Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-21T14:03:29.201610Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-21T14:03:29.201626Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-179014","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [dc30781d6411de75fdb8d3a0a226ade632b1d8d95c142cc66ff2bff0e1411510] <==
	{"level":"warn","ts":"2025-11-21T14:03:52.234444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.241690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.253651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.260280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.265955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.271843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.278612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.286718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.292712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.299432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.311429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.317283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.323105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.338450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.344696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.351311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:03:52.406926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35508","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T14:04:59.627275Z","caller":"traceutil/trace.go:172","msg":"trace[1448741269] transaction","detail":"{read_only:false; response_revision:839; number_of_response:1; }","duration":"178.812232ms","start":"2025-11-21T14:04:59.448439Z","end":"2025-11-21T14:04:59.627251Z","steps":["trace[1448741269] 'process raft request'  (duration: 178.636329ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T14:05:04.124334Z","caller":"traceutil/trace.go:172","msg":"trace[362565768] linearizableReadLoop","detail":"{readStateIndex:913; appliedIndex:913; }","duration":"173.5413ms","start":"2025-11-21T14:05:03.950777Z","end":"2025-11-21T14:05:04.124318Z","steps":["trace[362565768] 'read index received'  (duration: 173.535551ms)","trace[362565768] 'applied index is now lower than readState.Index'  (duration: 4.889µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T14:05:04.124503Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.714829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-11-21T14:05:04.124550Z","caller":"traceutil/trace.go:172","msg":"trace[2062482857] transaction","detail":"{read_only:false; response_revision:843; number_of_response:1; }","duration":"208.695194ms","start":"2025-11-21T14:05:03.915838Z","end":"2025-11-21T14:05:04.124533Z","steps":["trace[2062482857] 'process raft request'  (duration: 208.596878ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T14:05:04.124584Z","caller":"traceutil/trace.go:172","msg":"trace[422450583] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:842; }","duration":"173.807731ms","start":"2025-11-21T14:05:03.950765Z","end":"2025-11-21T14:05:04.124572Z","steps":["trace[422450583] 'agreement among raft nodes before linearized reading'  (duration: 173.628798ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T14:13:51.917006Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1114}
	{"level":"info","ts":"2025-11-21T14:13:51.935876Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1114,"took":"18.541904ms","hash":3919794909,"current-db-size-bytes":3334144,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1507328,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-21T14:13:51.935912Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3919794909,"revision":1114,"compact-revision":-1}
	
	
	==> kernel <==
	 14:14:26 up 56 min,  0 user,  load average: 0.02, 0.23, 0.38
	Linux functional-179014 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ab03492117dc9100613797849dff77c09e5091b815bfb669b5194ace163b1ca] <==
	I1121 14:02:13.982242       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:02:13.982476       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1121 14:02:13.982617       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:02:13.982633       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:02:13.982653       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:02:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:02:14.180422       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:02:14.180497       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:02:14.180513       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:02:14.181127       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 14:02:44.182028       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 14:02:44.182033       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 14:02:44.182089       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 14:02:44.182224       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1121 14:02:45.381003       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:02:45.381026       1 metrics.go:72] Registering metrics
	I1121 14:02:45.381092       1 controller.go:711] "Syncing nftables rules"
	I1121 14:02:54.185195       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:02:54.185228       1 main.go:301] handling current node
	I1121 14:03:04.188931       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:03:04.188961       1 main.go:301] handling current node
	I1121 14:03:14.184651       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:03:14.184686       1 main.go:301] handling current node
	
	
	==> kindnet [943edd36f98a54f78435eff189b4c089b218174c6c0a588a555955a34947a993] <==
	I1121 14:12:19.473719       1 main.go:301] handling current node
	I1121 14:12:29.482227       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:12:29.482257       1 main.go:301] handling current node
	I1121 14:12:39.474657       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:12:39.474689       1 main.go:301] handling current node
	I1121 14:12:49.482792       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:12:49.482822       1 main.go:301] handling current node
	I1121 14:12:59.481646       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:12:59.481676       1 main.go:301] handling current node
	I1121 14:13:09.474429       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:13:09.474479       1 main.go:301] handling current node
	I1121 14:13:19.473942       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:13:19.473971       1 main.go:301] handling current node
	I1121 14:13:29.482086       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:13:29.482118       1 main.go:301] handling current node
	I1121 14:13:39.474156       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:13:39.474188       1 main.go:301] handling current node
	I1121 14:13:49.473891       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:13:49.473921       1 main.go:301] handling current node
	I1121 14:13:59.479939       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:13:59.479969       1 main.go:301] handling current node
	I1121 14:14:09.473682       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:14:09.473710       1 main.go:301] handling current node
	I1121 14:14:19.474435       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 14:14:19.474471       1 main.go:301] handling current node
	
	
	==> kube-apiserver [59debf0742b1b69719847199182fa024c07159f7eca6e5eda4f34f64a7d9b5be] <==
	I1121 14:03:52.868318       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:03:53.666315       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:03:53.746319       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1121 14:03:53.950316       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1121 14:03:53.951443       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:03:53.955942       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:03:54.320625       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:03:54.402312       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:03:54.442286       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:03:54.447095       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:03:56.188759       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:04:18.631005       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.227.121"}
	I1121 14:04:23.875886       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.170.39"}
	I1121 14:04:24.708620       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.236.139"}
	I1121 14:04:28.726922       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.208.95"}
	E1121 14:04:39.723482       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45652: use of closed network connection
	I1121 14:04:45.508858       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:04:45.596130       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.66.5"}
	I1121 14:04:45.612650       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.251.200"}
	E1121 14:04:47.908913       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:33524: use of closed network connection
	I1121 14:04:48.040057       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.176.127"}
	E1121 14:05:04.207675       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39250: use of closed network connection
	E1121 14:05:05.345774       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37358: use of closed network connection
	E1121 14:05:07.428374       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37382: use of closed network connection
	I1121 14:13:52.769118       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [1d2c5dc715b1aa6a5d8eb796923dd82a3f22008380ff1dde78a5fc483a2c6d48] <==
	I1121 14:02:12.313672       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 14:02:12.313695       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 14:02:12.313721       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:02:12.313728       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:02:12.313751       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:02:12.313775       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:02:12.313794       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:02:12.314046       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:02:12.314670       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:02:12.316059       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 14:02:12.316107       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:02:12.317190       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:02:12.318305       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:02:12.318913       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:02:12.319822       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:02:12.319840       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 14:02:12.319883       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 14:02:12.319919       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 14:02:12.319925       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 14:02:12.319929       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 14:02:12.324100       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 14:02:12.324952       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-179014" podCIDRs=["10.244.0.0/24"]
	I1121 14:02:12.329128       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 14:02:12.333364       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:02:57.264485       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [8e7112a74d73de6abcb6420512a6dc98e54bc7af743e30310b041540f53df407] <==
	I1121 14:03:56.184604       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:03:56.184628       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 14:03:56.184594       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 14:03:56.185803       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:03:56.185823       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 14:03:56.185887       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 14:03:56.185909       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:03:56.185913       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:03:56.185891       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 14:03:56.190548       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:03:56.190591       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:03:56.190603       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:03:56.190612       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:03:56.190625       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:03:56.193514       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:03:56.195795       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:03:56.198032       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:03:56.201242       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 14:03:56.203456       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1121 14:04:45.549162       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1121 14:04:45.553347       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1121 14:04:45.554896       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1121 14:04:45.557342       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1121 14:04:45.559164       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1121 14:04:45.563397       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [0a2ea401cb2116bd9bb6db7a98499b1065ccc6967a0c5af53802d11dfc9e870c] <==
	I1121 14:02:13.846491       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:02:13.908114       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:02:14.008528       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:02:14.008574       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 14:02:14.008634       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:02:14.025604       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:02:14.025656       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:02:14.030806       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:02:14.031155       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:02:14.031178       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:02:14.032529       1 config.go:200] "Starting service config controller"
	I1121 14:02:14.032589       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:02:14.032613       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:02:14.032618       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:02:14.032682       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:02:14.032699       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:02:14.033230       1 config.go:309] "Starting node config controller"
	I1121 14:02:14.033281       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:02:14.033293       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:02:14.132718       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:02:14.132745       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:02:14.132746       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
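	
	
	==> note: kube-proxy nodePortAddresses warning <==
	Both kube-proxy instances in this report log "nodePortAddresses is unset; NodePort connections will be accepted on all local IPs" and themselves suggest `--nodeport-addresses primary`. A minimal sketch of the equivalent KubeProxyConfiguration field follows; the field is from kubeproxy.config.k8s.io/v1alpha1, and since this run's actual kube-proxy config is not shown in the report, treat the snippet as an illustrative assumption rather than the cluster's configuration:
	
	  apiVersion: kubeproxy.config.k8s.io/v1alpha1
	  kind: KubeProxyConfiguration
	  # Accept NodePort traffic only on the node's primary address(es)
	  # instead of on every local IP, per the warning's own suggestion.
	  nodePortAddresses: ["primary"]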
	
	
	==> kube-proxy [5c771bd7750183f1e939997e13795d4da33cd690a8349bb34dc47fb320c6391d] <==
	E1121 14:03:19.099947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-179014&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:03:19.978172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-179014&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:03:23.077324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-179014&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:03:27.858020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-179014&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:03:47.780377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-179014&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1121 14:04:07.899293       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:04:07.899319       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 14:04:07.899399       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:04:07.917258       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:04:07.917303       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:04:07.922507       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:04:07.922844       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:04:07.922871       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:04:07.923924       1 config.go:200] "Starting service config controller"
	I1121 14:04:07.923943       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:04:07.923954       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:04:07.923956       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:04:07.923983       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:04:07.923989       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:04:07.924142       1 config.go:309] "Starting node config controller"
	I1121 14:04:07.924163       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:04:07.924172       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:04:08.024842       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:04:08.024872       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:04:08.024900       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3ae744f3cc2b59dd59d49d63c7d1cc5456ffb4eab47a653418e6b2d0ec0597dc] <==
	E1121 14:02:05.340891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:02:05.340878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:02:05.340713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:02:05.340950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:02:05.340974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:02:05.340990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:02:05.341020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:02:06.169288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:02:06.214184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:02:06.260382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:02:06.280240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:02:06.354117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:02:06.354211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:02:06.441800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1121 14:02:06.474022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:02:06.490001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:02:06.537869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:02:06.545750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1121 14:02:08.736722       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:03:29.086915       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1121 14:03:29.087020       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1121 14:03:29.087047       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1121 14:03:29.087053       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:03:29.087100       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1121 14:03:29.087140       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [df6390294bab954592af505d8da7def6e5ebab6bcf4894d72e9ddc8882c9d608] <==
	I1121 14:03:51.705590       1 serving.go:386] Generated self-signed cert in-memory
	W1121 14:03:52.757198       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1121 14:03:52.757248       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1121 14:03:52.757264       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1121 14:03:52.757274       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1121 14:03:52.781779       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 14:03:52.781808       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:03:52.784355       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:03:52.784389       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:03:52.784780       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:03:52.785197       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 14:03:52.885056       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:11:43 functional-179014 kubelet[4153]: E1121 14:11:43.501408    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hkgzj" podUID="9009aa16-13b4-4d22-bae9-e31a6f3bbc33"
	Nov 21 14:11:46 functional-179014 kubelet[4153]: E1121 14:11:46.501655    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6xn2m" podUID="650cb18b-160a-4099-8353-48a47ac5d9f3"
	Nov 21 14:11:58 functional-179014 kubelet[4153]: E1121 14:11:58.501000    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hkgzj" podUID="9009aa16-13b4-4d22-bae9-e31a6f3bbc33"
	Nov 21 14:11:59 functional-179014 kubelet[4153]: E1121 14:11:59.500848    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6xn2m" podUID="650cb18b-160a-4099-8353-48a47ac5d9f3"
	Nov 21 14:12:09 functional-179014 kubelet[4153]: E1121 14:12:09.501463    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hkgzj" podUID="9009aa16-13b4-4d22-bae9-e31a6f3bbc33"
	Nov 21 14:12:12 functional-179014 kubelet[4153]: E1121 14:12:12.501019    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6xn2m" podUID="650cb18b-160a-4099-8353-48a47ac5d9f3"
	Nov 21 14:12:24 functional-179014 kubelet[4153]: E1121 14:12:24.501431    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hkgzj" podUID="9009aa16-13b4-4d22-bae9-e31a6f3bbc33"
	Nov 21 14:12:25 functional-179014 kubelet[4153]: E1121 14:12:25.501138    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6xn2m" podUID="650cb18b-160a-4099-8353-48a47ac5d9f3"
	Nov 21 14:12:36 functional-179014 kubelet[4153]: E1121 14:12:36.503050    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hkgzj" podUID="9009aa16-13b4-4d22-bae9-e31a6f3bbc33"
	Nov 21 14:12:39 functional-179014 kubelet[4153]: E1121 14:12:39.501219    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6xn2m" podUID="650cb18b-160a-4099-8353-48a47ac5d9f3"
	Nov 21 14:12:51 functional-179014 kubelet[4153]: E1121 14:12:51.500743    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hkgzj" podUID="9009aa16-13b4-4d22-bae9-e31a6f3bbc33"
	Nov 21 14:12:52 functional-179014 kubelet[4153]: E1121 14:12:52.501494    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6xn2m" podUID="650cb18b-160a-4099-8353-48a47ac5d9f3"
	Nov 21 14:13:05 functional-179014 kubelet[4153]: E1121 14:13:05.501358    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6xn2m" podUID="650cb18b-160a-4099-8353-48a47ac5d9f3"
	Nov 21 14:13:06 functional-179014 kubelet[4153]: E1121 14:13:06.501269    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hkgzj" podUID="9009aa16-13b4-4d22-bae9-e31a6f3bbc33"
	Nov 21 14:13:20 functional-179014 kubelet[4153]: E1121 14:13:20.501324    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6xn2m" podUID="650cb18b-160a-4099-8353-48a47ac5d9f3"
	Nov 21 14:13:21 functional-179014 kubelet[4153]: E1121 14:13:21.501585    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hkgzj" podUID="9009aa16-13b4-4d22-bae9-e31a6f3bbc33"
	Nov 21 14:13:31 functional-179014 kubelet[4153]: E1121 14:13:31.501016    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6xn2m" podUID="650cb18b-160a-4099-8353-48a47ac5d9f3"
	Nov 21 14:13:32 functional-179014 kubelet[4153]: E1121 14:13:32.500871    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hkgzj" podUID="9009aa16-13b4-4d22-bae9-e31a6f3bbc33"
	Nov 21 14:13:43 functional-179014 kubelet[4153]: E1121 14:13:43.501610    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6xn2m" podUID="650cb18b-160a-4099-8353-48a47ac5d9f3"
	Nov 21 14:13:46 functional-179014 kubelet[4153]: E1121 14:13:46.500732    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hkgzj" podUID="9009aa16-13b4-4d22-bae9-e31a6f3bbc33"
	Nov 21 14:13:55 functional-179014 kubelet[4153]: E1121 14:13:55.500752    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6xn2m" podUID="650cb18b-160a-4099-8353-48a47ac5d9f3"
	Nov 21 14:13:59 functional-179014 kubelet[4153]: E1121 14:13:59.501405    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hkgzj" podUID="9009aa16-13b4-4d22-bae9-e31a6f3bbc33"
	Nov 21 14:14:07 functional-179014 kubelet[4153]: E1121 14:14:07.501478    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6xn2m" podUID="650cb18b-160a-4099-8353-48a47ac5d9f3"
	Nov 21 14:14:14 functional-179014 kubelet[4153]: E1121 14:14:14.501154    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hkgzj" podUID="9009aa16-13b4-4d22-bae9-e31a6f3bbc33"
	Nov 21 14:14:20 functional-179014 kubelet[4153]: E1121 14:14:20.501229    4153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6xn2m" podUID="650cb18b-160a-4099-8353-48a47ac5d9f3"
	
	
	==> kubernetes-dashboard [8e4b221809e21f7268899c3d7fcc559ddeed2d3ef2dd7b96c7052e11f2ded4b9] <==
	2025/11/21 14:04:50 Using namespace: kubernetes-dashboard
	2025/11/21 14:04:50 Using in-cluster config to connect to apiserver
	2025/11/21 14:04:50 Using secret token for csrf signing
	2025/11/21 14:04:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 14:04:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 14:04:50 Successful initial request to the apiserver, version: v1.34.1
	2025/11/21 14:04:50 Generating JWE encryption key
	2025/11/21 14:04:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 14:04:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 14:04:51 Initializing JWE encryption key from synchronized object
	2025/11/21 14:04:51 Creating in-cluster Sidecar client
	2025/11/21 14:04:51 Successful request to sidecar
	2025/11/21 14:04:51 Serving insecurely on HTTP port: 9090
	2025/11/21 14:04:50 Starting overwatch
	
	
	==> storage-provisioner [5fd8f2d24cf79fbe014cd82bc48cd82a7ac0984e3c85113cc8ea5383dcb553a0] <==
	I1121 14:03:19.008396       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 14:03:19.011391       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [fc77d2faf90eb5f26cca217aba375e402ccf58f0e298305e1ab48d24f12abd69] <==
	W1121 14:14:01.385398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:03.388068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:03.391902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:05.394714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:05.398265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:07.400922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:07.404206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:09.407255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:09.411913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:11.414544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:11.418187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:13.420729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:13.425239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:15.427905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:15.431288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:17.434117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:17.438648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:19.441465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:19.445089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:21.448129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:21.451470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:23.453986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:23.457484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:25.460067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:14:25.463872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-179014 -n functional-179014
helpers_test.go:269: (dbg) Run:  kubectl --context functional-179014 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-hkgzj hello-node-connect-7d85dfc575-6xn2m
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-179014 describe pod busybox-mount hello-node-75c85bcc94-hkgzj hello-node-connect-7d85dfc575-6xn2m
helpers_test.go:290: (dbg) kubectl --context functional-179014 describe pod busybox-mount hello-node-75c85bcc94-hkgzj hello-node-connect-7d85dfc575-6xn2m:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-179014/192.168.49.2
	Start Time:       Fri, 21 Nov 2025 14:04:34 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://d5f0b2aa2ca2e91e13a3f5015d2d6fe42bfcd1d59eb2250f96bdf99c03968768
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 21 Nov 2025 14:04:35 +0000
	      Finished:     Fri, 21 Nov 2025 14:04:35 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xmfxt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xmfxt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m53s  default-scheduler  Successfully assigned default/busybox-mount to functional-179014
	  Normal  Pulling    9m53s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m52s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 699ms (699ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m52s  kubelet            Created container: mount-munger
	  Normal  Started    9m52s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-hkgzj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-179014/192.168.49.2
	Start Time:       Fri, 21 Nov 2025 14:04:28 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-59vrd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-59vrd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m58s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hkgzj to functional-179014
	  Normal   Pulling    6m59s (x5 over 9m59s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m59s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m59s (x5 over 9m59s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m48s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m48s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-6xn2m
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-179014/192.168.49.2
	Start Time:       Fri, 21 Nov 2025 14:04:24 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r6w59 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r6w59:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6xn2m to functional-179014
	  Normal   Pulling    7m3s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m3s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m3s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m51s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m51s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.66s)
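The ErrImagePull loop above is CRI-O's short-name policy at work: with short-name-mode = "enforcing" in /etc/containers/registries.conf, an unqualified reference such as kicbase/echo-server:latest that matches more than one unqualified-search registry is rejected as ambiguous instead of being pulled. A minimal sketch of two workarounds, assuming docker.io is the intended registry (the qualified name and the drop-in path are assumptions, not taken from this run):

	# Workaround A: deploy with a fully qualified image reference
	kubectl --context functional-179014 create deployment hello-node-connect \
	  --image=docker.io/kicbase/echo-server:latest

	# Workaround B: give CRI-O a short-name alias inside the node, then
	# restart CRI-O so it rereads the registries configuration
	minikube -p functional-179014 ssh -- sudo tee /etc/containers/registries.conf.d/99-echo-server.conf <<-'EOF'
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"
	EOF
	minikube -p functional-179014 ssh -- sudo systemctl restart crio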

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (2.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-179014 image ls --format short --alsologtostderr: (2.304643191s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-179014 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-179014 image ls --format short --alsologtostderr:
I1121 14:04:55.334460   54080 out.go:360] Setting OutFile to fd 1 ...
I1121 14:04:55.334737   54080 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:04:55.334749   54080 out.go:374] Setting ErrFile to fd 2...
I1121 14:04:55.334755   54080 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:04:55.335071   54080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
I1121 14:04:55.335828   54080 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:04:55.335977   54080 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:04:55.336514   54080 cli_runner.go:164] Run: docker container inspect functional-179014 --format={{.State.Status}}
I1121 14:04:55.358666   54080 ssh_runner.go:195] Run: systemctl --version
I1121 14:04:55.358868   54080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-179014
I1121 14:04:55.381222   54080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/functional-179014/id_rsa Username:docker}
I1121 14:04:55.483610   54080 ssh_runner.go:195] Run: sudo crictl images --output json
I1121 14:04:57.513122   54080 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.029473158s)
W1121 14:04:57.513220   54080 cache_images.go:736] Failed to list images for profile functional-179014 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1121 14:04:57.510376    7304 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-11-21T14:04:57Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.30s)
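The assertion failed, but the underlying problem is a CRI timeout rather than a missing image: crictl's gRPC deadline expired (DeadlineExceeded / RST_STREAM) before CRI-O answered ListImages. A quick way to tell a slow image service from a dead one is to retry with a longer client deadline and inspect the daemon log around the failure window; a sketch, with the 30s deadline and the time window as assumptions:

	# retry the same query with a longer gRPC deadline
	minikube -p functional-179014 ssh -- sudo crictl --timeout 30s images --output json

	# look for CRI-O stalls around 14:04:55-14:04:57 UTC
	minikube -p functional-179014 ssh -- sudo journalctl -u crio --since '14:04:50' --until '14:05:10'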

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image load --daemon kicbase/echo-server:functional-179014 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-179014" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)
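This failure and the two daemon-load failures that follow share one shape: image load --daemon exits zero, yet the follow-up image ls does not show the tag (and image ls itself was flaky here, per ImageListShort above). A minimal sketch for reproducing the cycle by hand to separate load failures from listing failures:

	# confirm the tag exists on the host side, load it, then verify from both sides
	docker image inspect kicbase/echo-server:functional-179014 --format '{{.Id}}'
	out/minikube-linux-amd64 -p functional-179014 image load --daemon kicbase/echo-server:functional-179014
	out/minikube-linux-amd64 -p functional-179014 image ls | grep echo-server
	minikube -p functional-179014 ssh -- sudo crictl images | grep echo-server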

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image load --daemon kicbase/echo-server:functional-179014 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-179014" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.07s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-179014
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image load --daemon kicbase/echo-server:functional-179014 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-179014" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image save kicbase/echo-server:functional-179014 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1121 14:04:28.164339   48464 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:04:28.164670   48464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:04:28.164681   48464 out.go:374] Setting ErrFile to fd 2...
	I1121 14:04:28.164686   48464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:04:28.164926   48464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:04:28.165499   48464 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:04:28.165621   48464 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:04:28.166014   48464 cli_runner.go:164] Run: docker container inspect functional-179014 --format={{.State.Status}}
	I1121 14:04:28.182976   48464 ssh_runner.go:195] Run: systemctl --version
	I1121 14:04:28.183018   48464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-179014
	I1121 14:04:28.199351   48464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/functional-179014/id_rsa Username:docker}
	I1121 14:04:28.290315   48464 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1121 14:04:28.290364   48464 cache_images.go:255] Failed to load cached images for "functional-179014": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1121 14:04:28.290403   48464 cache_images.go:267] failed pushing to: functional-179014

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)
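This one is downstream collateral: ImageSaveToFile never wrote the archive, so the load step fails at stat with "no such file or directory". A sketch for confirming the dependency before debugging the loader itself (the /tmp path is an assumption):

	# the load can only succeed if the save step actually produced a tarball
	out/minikube-linux-amd64 -p functional-179014 image save kicbase/echo-server:functional-179014 /tmp/echo-server-save.tar
	ls -l /tmp/echo-server-save.tar && \
	  out/minikube-linux-amd64 -p functional-179014 image load /tmp/echo-server-save.tar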

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-179014
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image save --daemon kicbase/echo-server:functional-179014 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-179014
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-179014: exit status 1 (16.681304ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-179014

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-179014

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)
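The test expects image save --daemon to land the image in the host Docker daemon under the localhost/ prefix. A sketch for checking both the bare and prefixed names after a manual save (the prefix behavior is the test's expectation, not verified in this run):

	out/minikube-linux-amd64 -p functional-179014 image save --daemon kicbase/echo-server:functional-179014
	docker images --format '{{.Repository}}:{{.Tag}}' | grep echo-server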

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-179014 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-179014 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-hkgzj" [9009aa16-13b4-4d22-bae9-e31a6f3bbc33] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-179014 -n functional-179014
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-21 14:14:29.024220234 +0000 UTC m=+1131.038920195
functional_test.go:1460: (dbg) Run:  kubectl --context functional-179014 describe po hello-node-75c85bcc94-hkgzj -n default
functional_test.go:1460: (dbg) kubectl --context functional-179014 describe po hello-node-75c85bcc94-hkgzj -n default:
Name:             hello-node-75c85bcc94-hkgzj
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-179014/192.168.49.2
Start Time:       Fri, 21 Nov 2025 14:04:28 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-59vrd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-59vrd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hkgzj to functional-179014
Normal   Pulling    7m1s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m1s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m1s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m50s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m50s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-179014 logs hello-node-75c85bcc94-hkgzj -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-179014 logs hello-node-75c85bcc94-hkgzj -n default: exit status 1 (57.057275ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-hkgzj" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-179014 logs hello-node-75c85bcc94-hkgzj -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.53s)
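The ten-minute wait here burns down to the same short-name ambiguity seen in ServiceCmdConnect. Rather than waiting out the ImagePullBackOff, the deployment can be pointed at a fully qualified reference; a sketch, assuming docker.io is the intended registry:

	kubectl --context functional-179014 set image deployment/hello-node \
	  echo-server=docker.io/kicbase/echo-server:latest
	kubectl --context functional-179014 rollout status deployment/hello-node --timeout=120s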

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179014 service --namespace=default --https --url hello-node: exit status 115 (514.595116ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32422
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-179014 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179014 service hello-node --url --format={{.IP}}: exit status 115 (517.462348ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-179014 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179014 service hello-node --url: exit status 115 (511.43826ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32422
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-179014 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32422
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.51s)
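The three service failures above (HTTPS, Format, URL) are secondary: minikube resolves the NodePort URL correctly (see the stdout blocks) but exits with SVC_UNREACHABLE because the service has no ready backing pod. A sketch of the equivalent readiness check:

	# a service whose pods never became Ready has an empty endpoints list
	kubectl --context functional-179014 get endpoints hello-node
	kubectl --context functional-179014 get pods -l app=hello-node -o wide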

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.18s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-196016 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-196016 --output=json --user=testUser: exit status 80 (2.177318458s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"78f08780-1023-4940-83fd-e67e37e0a927","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-196016 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"e37f5fc2-783a-4058-bf66-06d8e8b5db45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-21T14:22:59Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"27abffaa-f02b-43f2-84b4-74169ddeb525","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-196016 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.18s)
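GUEST_PAUSE (and GUEST_UNPAUSE in the next test) come from minikube probing container state with sudo runc list -f json, which fails because /run/runc does not exist on the node. A diagnostic sketch; the alternate state directory is an assumption about where CRI-O keeps its runc state:

	# reproduce minikube's probe inside the node
	minikube -p json-output-196016 ssh -- sudo runc list -f json

	# check whether runtime state lives under a different root
	minikube -p json-output-196016 ssh -- sudo ls -d /run/runc /run/crio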

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-196016 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-196016 --output=json --user=testUser: exit status 80 (1.605502309s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1f6be297-b7a6-4890-a8bb-d7993f30a142","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-196016 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"ea65ef65-bbdd-4479-8cdb-5af434c3b718","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-21T14:23:01Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"fc776f0c-d71d-4bf0-8a8b-c1fa68b1e0d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-196016 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.61s)

                                                
                                    
x
+
TestPause/serial/Pause (4.85s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-738756 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-738756 --alsologtostderr -v=5: exit status 80 (1.708541196s)

                                                
                                                
-- stdout --
	* Pausing node pause-738756 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 14:37:32.806520  236997 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:37:32.806640  236997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:37:32.806650  236997 out.go:374] Setting ErrFile to fd 2...
	I1121 14:37:32.806657  236997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:37:32.806845  236997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:37:32.807085  236997 out.go:368] Setting JSON to false
	I1121 14:37:32.807134  236997 mustload.go:66] Loading cluster: pause-738756
	I1121 14:37:32.807480  236997 config.go:182] Loaded profile config "pause-738756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:37:32.807887  236997 cli_runner.go:164] Run: docker container inspect pause-738756 --format={{.State.Status}}
	I1121 14:37:32.825133  236997 host.go:66] Checking if "pause-738756" exists ...
	I1121 14:37:32.825366  236997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:37:32.882500  236997 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-21 14:37:32.872743555 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:37:32.883110  236997 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-738756 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1121 14:37:32.884586  236997 out.go:179] * Pausing node pause-738756 ... 
	I1121 14:37:32.885707  236997 host.go:66] Checking if "pause-738756" exists ...
	I1121 14:37:32.885938  236997 ssh_runner.go:195] Run: systemctl --version
	I1121 14:37:32.885984  236997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:32.902216  236997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/pause-738756/id_rsa Username:docker}
	I1121 14:37:32.994688  236997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:37:33.007616  236997 pause.go:52] kubelet running: true
	I1121 14:37:33.007668  236997 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:37:33.140035  236997 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:37:33.140140  236997 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:37:33.204035  236997 cri.go:89] found id: "ee3fce318e0a45135660390d314eef5598d0f9b44a19c85b18df5bc45372d2b0"
	I1121 14:37:33.204055  236997 cri.go:89] found id: "56d9e1a182ba989709141440de730d7378abf4f126e0dd2e04cc6b0b9983fcc3"
	I1121 14:37:33.204061  236997 cri.go:89] found id: "bcfe29abe072243bf8e5d03b9436f40d0ef8010d13e70df048b62eaab674dfba"
	I1121 14:37:33.204066  236997 cri.go:89] found id: "7e3b49e98729bb65e64035ae4d9e9d363321c322f22b5020b203d05e572e425c"
	I1121 14:37:33.204071  236997 cri.go:89] found id: "8438e531acf65cde01ed3f51f011623b945419737cc2076672ef80edc1bade42"
	I1121 14:37:33.204076  236997 cri.go:89] found id: "7af2de2ff1798680465409699de1fb7a564f52aa9c88d8fbedadd9879d1981ba"
	I1121 14:37:33.204080  236997 cri.go:89] found id: "8aa934ad274990a7ef46badfbc85e87cceabe99a35dd4597de2efe2efb335cbb"
	I1121 14:37:33.204083  236997 cri.go:89] found id: ""
	I1121 14:37:33.204120  236997 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:37:33.215255  236997 retry.go:31] will retry after 287.775366ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:37:33Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:37:33.503729  236997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:37:33.520374  236997 pause.go:52] kubelet running: false
	I1121 14:37:33.520437  236997 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:37:33.672904  236997 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:37:33.673022  236997 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:37:33.753106  236997 cri.go:89] found id: "ee3fce318e0a45135660390d314eef5598d0f9b44a19c85b18df5bc45372d2b0"
	I1121 14:37:33.753129  236997 cri.go:89] found id: "56d9e1a182ba989709141440de730d7378abf4f126e0dd2e04cc6b0b9983fcc3"
	I1121 14:37:33.753135  236997 cri.go:89] found id: "bcfe29abe072243bf8e5d03b9436f40d0ef8010d13e70df048b62eaab674dfba"
	I1121 14:37:33.753141  236997 cri.go:89] found id: "7e3b49e98729bb65e64035ae4d9e9d363321c322f22b5020b203d05e572e425c"
	I1121 14:37:33.753146  236997 cri.go:89] found id: "8438e531acf65cde01ed3f51f011623b945419737cc2076672ef80edc1bade42"
	I1121 14:37:33.753151  236997 cri.go:89] found id: "7af2de2ff1798680465409699de1fb7a564f52aa9c88d8fbedadd9879d1981ba"
	I1121 14:37:33.753156  236997 cri.go:89] found id: "8aa934ad274990a7ef46badfbc85e87cceabe99a35dd4597de2efe2efb335cbb"
	I1121 14:37:33.753160  236997 cri.go:89] found id: ""
	I1121 14:37:33.753215  236997 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:37:33.768044  236997 retry.go:31] will retry after 421.461081ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:37:33Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:37:34.190366  236997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:37:34.207810  236997 pause.go:52] kubelet running: false
	I1121 14:37:34.207865  236997 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:37:34.362221  236997 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:37:34.362291  236997 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:37:34.434798  236997 cri.go:89] found id: "ee3fce318e0a45135660390d314eef5598d0f9b44a19c85b18df5bc45372d2b0"
	I1121 14:37:34.434817  236997 cri.go:89] found id: "56d9e1a182ba989709141440de730d7378abf4f126e0dd2e04cc6b0b9983fcc3"
	I1121 14:37:34.434822  236997 cri.go:89] found id: "bcfe29abe072243bf8e5d03b9436f40d0ef8010d13e70df048b62eaab674dfba"
	I1121 14:37:34.434827  236997 cri.go:89] found id: "7e3b49e98729bb65e64035ae4d9e9d363321c322f22b5020b203d05e572e425c"
	I1121 14:37:34.434832  236997 cri.go:89] found id: "8438e531acf65cde01ed3f51f011623b945419737cc2076672ef80edc1bade42"
	I1121 14:37:34.434836  236997 cri.go:89] found id: "7af2de2ff1798680465409699de1fb7a564f52aa9c88d8fbedadd9879d1981ba"
	I1121 14:37:34.434839  236997 cri.go:89] found id: "8aa934ad274990a7ef46badfbc85e87cceabe99a35dd4597de2efe2efb335cbb"
	I1121 14:37:34.434843  236997 cri.go:89] found id: ""
	I1121 14:37:34.434886  236997 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:37:34.447931  236997 out.go:203] 
	W1121 14:37:34.449025  236997 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:37:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:37:34.449040  236997 out.go:285] * 
	W1121 14:37:34.454201  236997 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:37:34.455690  236997 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-738756 --alsologtostderr -v=5" : exit status 80
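Editor's note: the failure above reduces to one probe. After disabling kubelet, pause enumerates running containers with `sudo runc list -f json`; on this node the runc state directory /run/runc (runc's default root for the root user) does not exist, so every attempt, including both retry.go backoffs, exits with status 1 until pause gives up with GUEST_PAUSE (exit 80). Below is a minimal Go sketch of that probe-and-retry loop; it is an illustration of the failing step, not minikube's actual pause code, and the 3-attempt/300ms values are placeholders.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"time"
)

// runcState mirrors the fields of interest in `runc list -f json` output.
type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listRunning shells out exactly like the log lines above; a missing
// /run/runc makes runc exit non-zero, which surfaces here as an error.
func listRunning() ([]runcState, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list -f json: %w", err)
	}
	var states []runcState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, err
	}
	return states, nil
}

func main() {
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ {
		states, err := listRunning()
		if err == nil {
			fmt.Printf("%d containers visible to runc\n", len(states))
			return
		}
		lastErr = err
		time.Sleep(300 * time.Millisecond) // the report shows similar retry.go backoffs
	}
	fmt.Println("giving up:", lastErr) // pause maps this to GUEST_PAUSE
}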
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-738756
helpers_test.go:243: (dbg) docker inspect pause-738756:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1b205bee155e65d38e3b643e92f606dbd8df0de11d9b54efe3560a4f5cdc871e",
	        "Created": "2025-11-21T14:36:48.036828535Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 228030,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:36:48.080046475Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/1b205bee155e65d38e3b643e92f606dbd8df0de11d9b54efe3560a4f5cdc871e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1b205bee155e65d38e3b643e92f606dbd8df0de11d9b54efe3560a4f5cdc871e/hostname",
	        "HostsPath": "/var/lib/docker/containers/1b205bee155e65d38e3b643e92f606dbd8df0de11d9b54efe3560a4f5cdc871e/hosts",
	        "LogPath": "/var/lib/docker/containers/1b205bee155e65d38e3b643e92f606dbd8df0de11d9b54efe3560a4f5cdc871e/1b205bee155e65d38e3b643e92f606dbd8df0de11d9b54efe3560a4f5cdc871e-json.log",
	        "Name": "/pause-738756",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-738756:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-738756",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1b205bee155e65d38e3b643e92f606dbd8df0de11d9b54efe3560a4f5cdc871e",
	                "LowerDir": "/var/lib/docker/overlay2/d6f724eadcbfc07a29479434f01517070c10238bc8a00c09db6548360d21e8b0-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d6f724eadcbfc07a29479434f01517070c10238bc8a00c09db6548360d21e8b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d6f724eadcbfc07a29479434f01517070c10238bc8a00c09db6548360d21e8b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d6f724eadcbfc07a29479434f01517070c10238bc8a00c09db6548360d21e8b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-738756",
	                "Source": "/var/lib/docker/volumes/pause-738756/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-738756",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-738756",
	                "name.minikube.sigs.k8s.io": "pause-738756",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5238646e7d40e765eec41c8dcfaf08d80bd2c76fbbde6602795ca4115bfd3540",
	            "SandboxKey": "/var/run/docker/netns/5238646e7d40",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-738756": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2186bb521b00e5c019e854abbbfb8801abf7ffcb6af2eee49c4b81e656682603",
	                    "EndpointID": "2acd6deb0c31df22569fc4c9cceaa6fbaf726b0faef3da3204fa8f9c75cfe9d6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "52:ea:7b:a9:8e:28",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-738756",
	                        "1b205bee155e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
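Editor's note: the piece of this inspect dump the harness actually consumes is NetworkSettings.Ports; the pause attempt earlier read the SSH endpoint with the Go template shown on the cli_runner.go line above, which resolves to 127.0.0.1:33048 here. A small sketch of the same extraction, shelling out to docker with that template (illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port mapped to the container's 22/tcp,
// using the same Go template that appears in the stderr log above.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("pause-738756")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + port) // expect 33048 per the dump above
}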
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-738756 -n pause-738756
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-738756 -n pause-738756: exit status 2 (322.278662ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
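Editor's note: the Running/exit-2 combination is consistent with the failed pause: the first pass had already run `systemctl disable --now kubelet` before `runc list` failed, so the stderr log flips from "kubelet running: true" to "false" and stays there while the host container keeps running. The liveness probe involved is just a systemd check; a one-line sketch of the standard form, assuming a systemd host (the report's own invocation inserts an extra `service` token):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 only while the unit is
	// active, matching the pause.go:52 "kubelet running" lines above.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet running:", err == nil)
}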
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-738756 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m03 sudo cat /home/docker/cp-test_multinode-384928-m02_multinode-384928-m03.txt                                                                                                                      │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ cp      │ multinode-384928 cp testdata/cp-test.txt multinode-384928-m03:/home/docker/cp-test.txt                                                                                                                                                        │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m03 sudo cat /home/docker/cp-test.txt                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ cp      │ multinode-384928 cp multinode-384928-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1537588437/001/cp-test_multinode-384928-m03.txt                                                                                             │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m03 sudo cat /home/docker/cp-test.txt                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ cp      │ multinode-384928 cp multinode-384928-m03:/home/docker/cp-test.txt multinode-384928:/home/docker/cp-test_multinode-384928-m03_multinode-384928.txt                                                                                             │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m03 sudo cat /home/docker/cp-test.txt                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928 sudo cat /home/docker/cp-test_multinode-384928-m03_multinode-384928.txt                                                                                                                              │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ cp      │ multinode-384928 cp multinode-384928-m03:/home/docker/cp-test.txt multinode-384928-m02:/home/docker/cp-test_multinode-384928-m03_multinode-384928-m02.txt                                                                                     │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m03 sudo cat /home/docker/cp-test.txt                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m02 sudo cat /home/docker/cp-test_multinode-384928-m03_multinode-384928-m02.txt                                                                                                                      │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ node    │ multinode-384928 node stop m03                                                                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ node    │ multinode-384928 node start m03 -v=5 --alsologtostderr                                                                                                                                                                                        │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ node    │ list -p multinode-384928                                                                                                                                                                                                                      │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ stop    │ -p multinode-384928                                                                                                                                                                                                                           │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p cert-expiration-046125 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-046125   │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p cert-options-116734 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ delete  │ -p force-systemd-env-653926                                                                                                                                                                                                                   │ force-systemd-env-653926 │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p pause-738756 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                                     │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:37 UTC │
	│ ssh     │ cert-options-116734 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ ssh     │ -p cert-options-116734 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ delete  │ -p cert-options-116734                                                                                                                                                                                                                        │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │                     │
	│ start   │ -p pause-738756 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ pause   │ -p pause-738756 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:37:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:37:26.823965  235599 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:37:26.824245  235599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:37:26.824258  235599 out.go:374] Setting ErrFile to fd 2...
	I1121 14:37:26.824264  235599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:37:26.824437  235599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:37:26.824828  235599 out.go:368] Setting JSON to false
	I1121 14:37:26.826000  235599 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4796,"bootTime":1763731051,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:37:26.826089  235599 start.go:143] virtualization: kvm guest
	I1121 14:37:26.828042  235599 out.go:179] * [pause-738756] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:37:26.829391  235599 notify.go:221] Checking for updates...
	I1121 14:37:26.829411  235599 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:37:26.830722  235599 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:37:26.831877  235599 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:37:26.833035  235599 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:37:26.833995  235599 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:37:26.834972  235599 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:37:26.836470  235599 config.go:182] Loaded profile config "pause-738756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:37:26.836964  235599 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:37:26.860069  235599 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:37:26.860143  235599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:37:26.913176  235599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-21 14:37:26.903862876 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:37:26.913268  235599 docker.go:319] overlay module found
	I1121 14:37:26.915825  235599 out.go:179] * Using the docker driver based on existing profile
	I1121 14:37:26.916781  235599 start.go:309] selected driver: docker
	I1121 14:37:26.916794  235599 start.go:930] validating driver "docker" against &{Name:pause-738756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-738756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:37:26.916951  235599 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:37:26.917025  235599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:37:26.972774  235599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-21 14:37:26.963678484 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:37:26.973535  235599 cni.go:84] Creating CNI manager for ""
	I1121 14:37:26.973619  235599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:37:26.973678  235599 start.go:353] cluster config:
	{Name:pause-738756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-738756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:37:26.975442  235599 out.go:179] * Starting "pause-738756" primary control-plane node in "pause-738756" cluster
	I1121 14:37:26.976438  235599 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:37:26.977397  235599 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:37:26.978332  235599 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:37:26.978367  235599 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 14:37:26.978375  235599 cache.go:65] Caching tarball of preloaded images
	I1121 14:37:26.978434  235599 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:37:26.978452  235599 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 14:37:26.978460  235599 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:37:26.978600  235599 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/config.json ...
	I1121 14:37:26.997252  235599 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:37:26.997271  235599 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:37:26.997285  235599 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:37:26.997308  235599 start.go:360] acquireMachinesLock for pause-738756: {Name:mk113b967a7ccc0234ad1b5ee68c8f3782010153 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:37:26.997355  235599 start.go:364] duration metric: took 30.634µs to acquireMachinesLock for "pause-738756"
	I1121 14:37:26.997385  235599 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:37:26.997392  235599 fix.go:54] fixHost starting: 
	I1121 14:37:26.997627  235599 cli_runner.go:164] Run: docker container inspect pause-738756 --format={{.State.Status}}
	I1121 14:37:27.014906  235599 fix.go:112] recreateIfNeeded on pause-738756: state=Running err=<nil>
	W1121 14:37:27.014927  235599 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 14:37:22.203391  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:22.703206  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:23.203686  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:23.703501  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:24.203355  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:24.703180  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:25.203505  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:25.703983  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:26.203646  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:26.703750  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:23.231052  202147 logs.go:123] Gathering logs for kube-controller-manager [830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291] ...
	I1121 14:37:23.231077  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291"
	I1121 14:37:23.258983  202147 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:37:23.259008  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:37:23.305077  202147 logs.go:123] Gathering logs for container status ...
	I1121 14:37:23.305103  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:37:23.333214  202147 logs.go:123] Gathering logs for kubelet ...
	I1121 14:37:23.333234  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:37:23.410624  202147 logs.go:123] Gathering logs for dmesg ...
	I1121 14:37:23.410649  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:37:25.924791  202147 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:37:25.925178  202147 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:37:25.925225  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:37:25.925270  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:37:25.951655  202147 cri.go:89] found id: "5a48743f053dd94ea1c19f18ed0d8b7f694b814d093ea0495600ae2b8d09e855"
	I1121 14:37:25.951675  202147 cri.go:89] found id: ""
	I1121 14:37:25.951683  202147 logs.go:282] 1 containers: [5a48743f053dd94ea1c19f18ed0d8b7f694b814d093ea0495600ae2b8d09e855]
	I1121 14:37:25.951726  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:37:25.955761  202147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:37:25.955820  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:37:25.981218  202147 cri.go:89] found id: ""
	I1121 14:37:25.981235  202147 logs.go:282] 0 containers: []
	W1121 14:37:25.981241  202147 logs.go:284] No container was found matching "etcd"
	I1121 14:37:25.981246  202147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:37:25.981292  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:37:26.006464  202147 cri.go:89] found id: ""
	I1121 14:37:26.006494  202147 logs.go:282] 0 containers: []
	W1121 14:37:26.006504  202147 logs.go:284] No container was found matching "coredns"
	I1121 14:37:26.006510  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:37:26.006548  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:37:26.031428  202147 cri.go:89] found id: "3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:37:26.031447  202147 cri.go:89] found id: ""
	I1121 14:37:26.031458  202147 logs.go:282] 1 containers: [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980]
	I1121 14:37:26.031509  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:37:26.035021  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:37:26.035069  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:37:26.059852  202147 cri.go:89] found id: ""
	I1121 14:37:26.059873  202147 logs.go:282] 0 containers: []
	W1121 14:37:26.059881  202147 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:37:26.059889  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:37:26.059938  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:37:26.084839  202147 cri.go:89] found id: "830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291"
	I1121 14:37:26.084856  202147 cri.go:89] found id: ""
	I1121 14:37:26.084863  202147 logs.go:282] 1 containers: [830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291]
	I1121 14:37:26.084903  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:37:26.088434  202147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:37:26.088475  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:37:26.113030  202147 cri.go:89] found id: ""
	I1121 14:37:26.113047  202147 logs.go:282] 0 containers: []
	W1121 14:37:26.113055  202147 logs.go:284] No container was found matching "kindnet"
	I1121 14:37:26.113062  202147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:37:26.113103  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:37:26.138004  202147 cri.go:89] found id: ""
	I1121 14:37:26.138024  202147 logs.go:282] 0 containers: []
	W1121 14:37:26.138033  202147 logs.go:284] No container was found matching "storage-provisioner"
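
Each "listing CRI containers" step above boils down to `crictl ps -a --quiet --name=<component>`, which prints one bare container ID per line; empty output produces the "No container was found" warnings for components that have not started. A sketch of that enumeration in Go, assuming crictl is reachable locally rather than over SSH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors `sudo crictl ps -a --quiet --name=<component>`:
// --quiet prints bare container IDs, one per line.
func listContainers(component string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids := listContainers(c)
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}
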
	I1121 14:37:26.138043  202147 logs.go:123] Gathering logs for kubelet ...
	I1121 14:37:26.138055  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:37:26.214421  202147 logs.go:123] Gathering logs for dmesg ...
	I1121 14:37:26.214446  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:37:26.228700  202147 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:37:26.228730  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:37:26.287004  202147 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:37:26.287024  202147 logs.go:123] Gathering logs for kube-apiserver [5a48743f053dd94ea1c19f18ed0d8b7f694b814d093ea0495600ae2b8d09e855] ...
	I1121 14:37:26.287035  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a48743f053dd94ea1c19f18ed0d8b7f694b814d093ea0495600ae2b8d09e855"
	I1121 14:37:26.317758  202147 logs.go:123] Gathering logs for kube-scheduler [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980] ...
	I1121 14:37:26.317781  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:37:26.361940  202147 logs.go:123] Gathering logs for kube-controller-manager [830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291] ...
	I1121 14:37:26.361965  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291"
	I1121 14:37:26.386628  202147 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:37:26.386651  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:37:26.429216  202147 logs.go:123] Gathering logs for container status ...
	I1121 14:37:26.429242  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:37:27.016730  235599 out.go:252] * Updating the running docker "pause-738756" container ...
	I1121 14:37:27.016759  235599 machine.go:94] provisionDockerMachine start ...
	I1121 14:37:27.016833  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:27.033998  235599 main.go:143] libmachine: Using SSH client type: native
	I1121 14:37:27.034236  235599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1121 14:37:27.034249  235599 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:37:27.163036  235599 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-738756
	
	I1121 14:37:27.163059  235599 ubuntu.go:182] provisioning hostname "pause-738756"
	I1121 14:37:27.163105  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:27.180930  235599 main.go:143] libmachine: Using SSH client type: native
	I1121 14:37:27.181148  235599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1121 14:37:27.181165  235599 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-738756 && echo "pause-738756" | sudo tee /etc/hostname
	I1121 14:37:27.320455  235599 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-738756
	
	I1121 14:37:27.320531  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:27.337616  235599 main.go:143] libmachine: Using SSH client type: native
	I1121 14:37:27.337835  235599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1121 14:37:27.337862  235599 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-738756' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-738756/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-738756' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:37:27.466533  235599 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:37:27.466554  235599 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11045/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11045/.minikube}
	I1121 14:37:27.466590  235599 ubuntu.go:190] setting up certificates
	I1121 14:37:27.466603  235599 provision.go:84] configureAuth start
	I1121 14:37:27.466655  235599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-738756
	I1121 14:37:27.484261  235599 provision.go:143] copyHostCerts
	I1121 14:37:27.484336  235599 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem, removing ...
	I1121 14:37:27.484352  235599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem
	I1121 14:37:27.484422  235599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem (1123 bytes)
	I1121 14:37:27.484527  235599 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem, removing ...
	I1121 14:37:27.484535  235599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem
	I1121 14:37:27.484589  235599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem (1679 bytes)
	I1121 14:37:27.484681  235599 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem, removing ...
	I1121 14:37:27.484689  235599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem
	I1121 14:37:27.484716  235599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem (1078 bytes)
	I1121 14:37:27.484797  235599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem org=jenkins.pause-738756 san=[127.0.0.1 192.168.85.2 localhost minikube pause-738756]
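
The server cert generated here is signed by the machine CA and carries the SAN list shown in the log (127.0.0.1, 192.168.85.2, localhost, minikube, pause-738756). A sketch of an equivalent signing step with crypto/x509; the local file names, RSA key size, and PKCS#1 key encoding are assumptions, since minikube's own implementation is internal:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the machine CA produced earlier in provisioning (paths assumed).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		panic("could not decode CA PEM inputs")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		panic(err)
	}

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-738756"}}, // org= from the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list copied from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"localhost", "minikube", "pause-738756"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey),
	}), 0o600)
}
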
	I1121 14:37:27.922323  235599 provision.go:177] copyRemoteCerts
	I1121 14:37:27.922371  235599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:37:27.922407  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:27.939749  235599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/pause-738756/id_rsa Username:docker}
	I1121 14:37:28.035165  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:37:28.052029  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1121 14:37:28.069673  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:37:28.085966  235599 provision.go:87] duration metric: took 619.351402ms to configureAuth
	I1121 14:37:28.085990  235599 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:37:28.086154  235599 config.go:182] Loaded profile config "pause-738756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:37:28.086249  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:28.103649  235599 main.go:143] libmachine: Using SSH client type: native
	I1121 14:37:28.103833  235599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1121 14:37:28.103850  235599 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:37:28.421299  235599 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:37:28.421319  235599 machine.go:97] duration metric: took 1.404554171s to provisionDockerMachine
	I1121 14:37:28.421329  235599 start.go:293] postStartSetup for "pause-738756" (driver="docker")
	I1121 14:37:28.421338  235599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:37:28.421406  235599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:37:28.421444  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:28.442262  235599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/pause-738756/id_rsa Username:docker}
	I1121 14:37:28.536082  235599 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:37:28.539507  235599 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:37:28.539540  235599 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:37:28.539550  235599 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/addons for local assets ...
	I1121 14:37:28.539625  235599 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/files for local assets ...
	I1121 14:37:28.539703  235599 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem -> 145422.pem in /etc/ssl/certs
	I1121 14:37:28.539784  235599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:37:28.547397  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:37:28.564179  235599 start.go:296] duration metric: took 142.837844ms for postStartSetup
	I1121 14:37:28.564245  235599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:37:28.564299  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:28.582495  235599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/pause-738756/id_rsa Username:docker}
	I1121 14:37:28.674349  235599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:37:28.679213  235599 fix.go:56] duration metric: took 1.681814373s for fixHost
	I1121 14:37:28.679236  235599 start.go:83] releasing machines lock for "pause-738756", held for 1.681869567s
	I1121 14:37:28.679300  235599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-738756
	I1121 14:37:28.696780  235599 ssh_runner.go:195] Run: cat /version.json
	I1121 14:37:28.696817  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:28.696874  235599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:37:28.696958  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:28.715386  235599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/pause-738756/id_rsa Username:docker}
	I1121 14:37:28.716113  235599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/pause-738756/id_rsa Username:docker}
	I1121 14:37:28.809134  235599 ssh_runner.go:195] Run: systemctl --version
	I1121 14:37:28.898271  235599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:37:28.933187  235599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:37:28.938604  235599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:37:28.938669  235599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:37:28.946479  235599 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
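
The find command above neutralizes competing CNI configs by renaming anything matching *bridge* or *podman* to *.mk_disabled, so only the CNI minikube installs (kindnet, chosen later in this log) gets picked up by the runtime. Roughly the same step in Go, as a local illustration:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Same globs the find command targets: bridge and podman CNI configs.
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}
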
	I1121 14:37:28.946499  235599 start.go:496] detecting cgroup driver to use...
	I1121 14:37:28.946532  235599 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:37:28.946592  235599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:37:28.961473  235599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:37:28.974109  235599 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:37:28.974154  235599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:37:28.988136  235599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:37:28.999500  235599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:37:29.111079  235599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:37:29.222433  235599 docker.go:234] disabling docker service ...
	I1121 14:37:29.222491  235599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:37:29.237157  235599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:37:29.249836  235599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:37:29.364799  235599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:37:29.473737  235599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:37:29.485821  235599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:37:29.499382  235599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:37:29.499453  235599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:29.508010  235599 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 14:37:29.508050  235599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:29.516325  235599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:29.524319  235599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:29.532273  235599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:37:29.539619  235599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:29.547812  235599 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:29.555426  235599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:29.563506  235599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:37:29.570335  235599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:37:29.577626  235599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:37:29.682841  235599 ssh_runner.go:195] Run: sudo systemctl restart crio
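
The sed edits above patch CRI-O's drop-in config in place before the restart. Reconstructed from those commands, /etc/crio/crio.conf.d/02-crio.conf plausibly ends up with values like the following; only the keys and values are confirmed by the log, and the surrounding TOML table headers are an assumption:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
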
	I1121 14:37:29.859994  235599 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:37:29.860062  235599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:37:29.864223  235599 start.go:564] Will wait 60s for crictl version
	I1121 14:37:29.864266  235599 ssh_runner.go:195] Run: which crictl
	I1121 14:37:29.867667  235599 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:37:29.890231  235599 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:37:29.890284  235599 ssh_runner.go:195] Run: crio --version
	I1121 14:37:29.916621  235599 ssh_runner.go:195] Run: crio --version
	I1121 14:37:29.944543  235599 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:37:29.945534  235599 cli_runner.go:164] Run: docker network inspect pause-738756 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:37:29.964909  235599 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:37:29.968930  235599 kubeadm.go:884] updating cluster {Name:pause-738756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-738756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:37:29.969067  235599 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:37:29.969109  235599 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:37:30.001100  235599 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:37:30.001122  235599 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:37:30.001167  235599 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:37:30.025614  235599 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:37:30.025631  235599 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:37:30.025638  235599 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1121 14:37:30.025717  235599 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-738756 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-738756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:37:30.025777  235599 ssh_runner.go:195] Run: crio config
	I1121 14:37:30.069234  235599 cni.go:84] Creating CNI manager for ""
	I1121 14:37:30.069252  235599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:37:30.069267  235599 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:37:30.069287  235599 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-738756 NodeName:pause-738756 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:37:30.069395  235599 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-738756"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:37:30.069444  235599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:37:30.077645  235599 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:37:30.077697  235599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:37:30.085132  235599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1121 14:37:30.097513  235599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:37:30.109507  235599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1121 14:37:30.121344  235599 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:37:30.124938  235599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:37:30.231736  235599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:37:30.244451  235599 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756 for IP: 192.168.85.2
	I1121 14:37:30.244467  235599 certs.go:195] generating shared ca certs ...
	I1121 14:37:30.244482  235599 certs.go:227] acquiring lock for ca certs: {Name:mkde3a7d6f17b238f06eab3a140993599f1b4367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:37:30.244639  235599 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key
	I1121 14:37:30.244679  235599 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key
	I1121 14:37:30.244689  235599 certs.go:257] generating profile certs ...
	I1121 14:37:30.244771  235599 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/client.key
	I1121 14:37:30.244825  235599 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/apiserver.key.60ca3139
	I1121 14:37:30.244863  235599 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/proxy-client.key
	I1121 14:37:30.244960  235599 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem (1338 bytes)
	W1121 14:37:30.244986  235599 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542_empty.pem, impossibly tiny 0 bytes
	I1121 14:37:30.244995  235599 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:37:30.245017  235599 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:37:30.245044  235599 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:37:30.245066  235599 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem (1679 bytes)
	I1121 14:37:30.245102  235599 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:37:30.245621  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:37:30.264457  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:37:30.282826  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:37:30.299838  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 14:37:30.316554  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 14:37:30.333890  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:37:30.350043  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:37:30.366133  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:37:30.382194  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:37:30.398036  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem --> /usr/share/ca-certificates/14542.pem (1338 bytes)
	I1121 14:37:30.415361  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /usr/share/ca-certificates/145422.pem (1708 bytes)
	I1121 14:37:30.431630  235599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:37:30.443274  235599 ssh_runner.go:195] Run: openssl version
	I1121 14:37:30.449049  235599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:37:30.456985  235599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:37:30.460353  235599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:37:30.460400  235599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:37:30.493678  235599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:37:30.501321  235599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14542.pem && ln -fs /usr/share/ca-certificates/14542.pem /etc/ssl/certs/14542.pem"
	I1121 14:37:30.509165  235599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14542.pem
	I1121 14:37:30.512555  235599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14542.pem
	I1121 14:37:30.512612  235599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	I1121 14:37:30.546740  235599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14542.pem /etc/ssl/certs/51391683.0"
	I1121 14:37:30.554166  235599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145422.pem && ln -fs /usr/share/ca-certificates/145422.pem /etc/ssl/certs/145422.pem"
	I1121 14:37:30.561812  235599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145422.pem
	I1121 14:37:30.565213  235599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145422.pem
	I1121 14:37:30.565253  235599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145422.pem
	I1121 14:37:30.599774  235599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145422.pem /etc/ssl/certs/3ec20f2e.0"
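
The pattern above is how OpenSSL trust stores work: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 for minikubeCA.pem in this log), and a symlink named <hash>.0 in /etc/ssl/certs makes the certificate discoverable by hash lookup. A Go sketch of one such link, assuming write access to /etc/ssl/certs (the log runs this under sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a PEM certificate and
// publishes it in /etc/ssl/certs as <hash>.0, the name OpenSSL looks up.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
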
	I1121 14:37:30.607616  235599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:37:30.611250  235599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:37:30.647947  235599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:37:30.683997  235599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:37:30.719784  235599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:37:30.756431  235599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:37:30.792639  235599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
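
`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether the existing control-plane certs can be reused on restart. A sketch of the same sweep, with the cert list copied from the log; the regeneration behavior on failure is minikube-internal and not reproduced here:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		// Exit status 0 means the cert stays valid for at least 24h.
		if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
			fmt.Printf("%s expires within 24h (or could not be read): %v\n", c, err)
		} else {
			fmt.Printf("%s valid for at least 24h\n", c)
		}
	}
}
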
	I1121 14:37:30.828228  235599 kubeadm.go:401] StartCluster: {Name:pause-738756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-738756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:37:30.828337  235599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:37:30.828374  235599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:37:30.854349  235599 cri.go:89] found id: "ee3fce318e0a45135660390d314eef5598d0f9b44a19c85b18df5bc45372d2b0"
	I1121 14:37:30.854370  235599 cri.go:89] found id: "56d9e1a182ba989709141440de730d7378abf4f126e0dd2e04cc6b0b9983fcc3"
	I1121 14:37:30.854377  235599 cri.go:89] found id: "bcfe29abe072243bf8e5d03b9436f40d0ef8010d13e70df048b62eaab674dfba"
	I1121 14:37:30.854382  235599 cri.go:89] found id: "7e3b49e98729bb65e64035ae4d9e9d363321c322f22b5020b203d05e572e425c"
	I1121 14:37:30.854387  235599 cri.go:89] found id: "8438e531acf65cde01ed3f51f011623b945419737cc2076672ef80edc1bade42"
	I1121 14:37:30.854392  235599 cri.go:89] found id: "7af2de2ff1798680465409699de1fb7a564f52aa9c88d8fbedadd9879d1981ba"
	I1121 14:37:30.854396  235599 cri.go:89] found id: "8aa934ad274990a7ef46badfbc85e87cceabe99a35dd4597de2efe2efb335cbb"
	I1121 14:37:30.854403  235599 cri.go:89] found id: ""
	I1121 14:37:30.854436  235599 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 14:37:30.865698  235599 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:37:30Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:37:30.865759  235599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:37:30.873378  235599 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:37:30.873391  235599 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:37:30.873421  235599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:37:30.880096  235599 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:37:30.880794  235599 kubeconfig.go:125] found "pause-738756" server: "https://192.168.85.2:8443"
	I1121 14:37:30.881578  235599 kapi.go:59] client config for pause-738756: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/client.crt", KeyFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/client.key", CAFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1121 14:37:30.881947  235599 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1121 14:37:30.881965  235599 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1121 14:37:30.881969  235599 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1121 14:37:30.881974  235599 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1121 14:37:30.881978  235599 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1121 14:37:30.882264  235599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:37:30.889426  235599 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1121 14:37:30.889453  235599 kubeadm.go:602] duration metric: took 16.055847ms to restartPrimaryControlPlane
	I1121 14:37:30.889462  235599 kubeadm.go:403] duration metric: took 61.239917ms to StartCluster
	I1121 14:37:30.889477  235599 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:37:30.889542  235599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:37:30.890464  235599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:37:30.890733  235599 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:37:30.890829  235599 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:37:30.890983  235599 config.go:182] Loaded profile config "pause-738756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:37:30.893122  235599 out.go:179] * Verifying Kubernetes components...
	I1121 14:37:30.893127  235599 out.go:179] * Enabled addons: 
	I1121 14:37:27.203912  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:27.703288  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:28.204101  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:28.703754  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:29.203217  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:29.703744  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:30.203896  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:30.703526  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:31.203639  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:31.703990  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:31.771402  230782 kubeadm.go:1114] duration metric: took 11.652724666s to wait for elevateKubeSystemPrivileges
	I1121 14:37:31.771441  230782 kubeadm.go:403] duration metric: took 21.101889143s to StartCluster
	I1121 14:37:31.771461  230782 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:37:31.771573  230782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:37:31.772968  230782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:37:31.773204  230782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:37:31.773228  230782 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:37:31.773277  230782 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:37:31.773374  230782 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-794941"
	I1121 14:37:31.773394  230782 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-794941"
	I1121 14:37:31.773402  230782 config.go:182] Loaded profile config "old-k8s-version-794941": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1121 14:37:31.773418  230782 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-794941"
	I1121 14:37:31.773424  230782 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-794941"
	I1121 14:37:31.773456  230782 host.go:66] Checking if "old-k8s-version-794941" exists ...
	I1121 14:37:31.773835  230782 cli_runner.go:164] Run: docker container inspect old-k8s-version-794941 --format={{.State.Status}}
	I1121 14:37:31.773937  230782 cli_runner.go:164] Run: docker container inspect old-k8s-version-794941 --format={{.State.Status}}
	I1121 14:37:31.774522  230782 out.go:179] * Verifying Kubernetes components...
	I1121 14:37:31.775805  230782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:37:31.796510  230782 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:37:30.894200  235599 addons.go:530] duration metric: took 3.372439ms for enable addons: enabled=[]
	I1121 14:37:30.894224  235599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:37:31.015208  235599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:37:31.030245  235599 node_ready.go:35] waiting up to 6m0s for node "pause-738756" to be "Ready" ...
	I1121 14:37:31.039055  235599 node_ready.go:49] node "pause-738756" is "Ready"
	I1121 14:37:31.039075  235599 node_ready.go:38] duration metric: took 8.788645ms for node "pause-738756" to be "Ready" ...
	I1121 14:37:31.039087  235599 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:37:31.039124  235599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:37:31.050032  235599 api_server.go:72] duration metric: took 159.263491ms to wait for apiserver process to appear ...
	I1121 14:37:31.050052  235599 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:37:31.050070  235599 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:37:31.053783  235599 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 14:37:31.054842  235599 api_server.go:141] control plane version: v1.34.1
	I1121 14:37:31.054861  235599 api_server.go:131] duration metric: took 4.803486ms to wait for apiserver health ...
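
The healthz wait above is a plain HTTPS poll of the apiserver's :8443/healthz endpoint until it answers 200 with body "ok". A sketch of that loop; minikube authenticates with the cluster CA and client certs (see the rest.Config dumps elsewhere in this log), so the InsecureSkipVerify shortcut here is purely an illustration:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // expect "ok"
				return
			}
		}
		// Connection refused (as seen earlier in this log) lands here.
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver never became healthy")
}
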
	I1121 14:37:31.054868  235599 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:37:31.057957  235599 system_pods.go:59] 7 kube-system pods found
	I1121 14:37:31.057984  235599 system_pods.go:61] "coredns-66bc5c9577-tc86b" [07e1e5d6-3bb7-4e12-8bba-fb901718b11e] Running
	I1121 14:37:31.057990  235599 system_pods.go:61] "etcd-pause-738756" [0bf4da64-b88f-4dc0-af80-bb5df7271b6a] Running
	I1121 14:37:31.057994  235599 system_pods.go:61] "kindnet-xjsdb" [2b5a6c88-1f8c-4d52-8efc-72da2ae7668c] Running
	I1121 14:37:31.057998  235599 system_pods.go:61] "kube-apiserver-pause-738756" [1f962bf1-1ac8-4efe-ad8a-95b35c942fb6] Running
	I1121 14:37:31.058002  235599 system_pods.go:61] "kube-controller-manager-pause-738756" [47f89ac6-c579-41b5-beda-dad24ac8b3ef] Running
	I1121 14:37:31.058010  235599 system_pods.go:61] "kube-proxy-4l9nn" [648edea0-a50a-4381-99de-f96747b514f1] Running
	I1121 14:37:31.058014  235599 system_pods.go:61] "kube-scheduler-pause-738756" [39b65a58-441f-41ce-826d-e81aa995ff39] Running
	I1121 14:37:31.058021  235599 system_pods.go:74] duration metric: took 3.147989ms to wait for pod list to return data ...
	I1121 14:37:31.058030  235599 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:37:31.059755  235599 default_sa.go:45] found service account: "default"
	I1121 14:37:31.059769  235599 default_sa.go:55] duration metric: took 1.734863ms for default service account to be created ...
	I1121 14:37:31.059776  235599 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:37:31.061878  235599 system_pods.go:86] 7 kube-system pods found
	I1121 14:37:31.061899  235599 system_pods.go:89] "coredns-66bc5c9577-tc86b" [07e1e5d6-3bb7-4e12-8bba-fb901718b11e] Running
	I1121 14:37:31.061903  235599 system_pods.go:89] "etcd-pause-738756" [0bf4da64-b88f-4dc0-af80-bb5df7271b6a] Running
	I1121 14:37:31.061907  235599 system_pods.go:89] "kindnet-xjsdb" [2b5a6c88-1f8c-4d52-8efc-72da2ae7668c] Running
	I1121 14:37:31.061910  235599 system_pods.go:89] "kube-apiserver-pause-738756" [1f962bf1-1ac8-4efe-ad8a-95b35c942fb6] Running
	I1121 14:37:31.061916  235599 system_pods.go:89] "kube-controller-manager-pause-738756" [47f89ac6-c579-41b5-beda-dad24ac8b3ef] Running
	I1121 14:37:31.061919  235599 system_pods.go:89] "kube-proxy-4l9nn" [648edea0-a50a-4381-99de-f96747b514f1] Running
	I1121 14:37:31.061922  235599 system_pods.go:89] "kube-scheduler-pause-738756" [39b65a58-441f-41ce-826d-e81aa995ff39] Running
	I1121 14:37:31.061928  235599 system_pods.go:126] duration metric: took 2.147436ms to wait for k8s-apps to be running ...
	I1121 14:37:31.061935  235599 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:37:31.061969  235599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:37:31.073453  235599 system_svc.go:56] duration metric: took 11.51265ms WaitForService to wait for kubelet
	I1121 14:37:31.073474  235599 kubeadm.go:587] duration metric: took 182.707582ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:37:31.073492  235599 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:37:31.075180  235599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:37:31.075203  235599 node_conditions.go:123] node cpu capacity is 8
	I1121 14:37:31.075217  235599 node_conditions.go:105] duration metric: took 1.718622ms to run NodePressure ...
	I1121 14:37:31.075230  235599 start.go:242] waiting for startup goroutines ...
	I1121 14:37:31.075241  235599 start.go:247] waiting for cluster config update ...
	I1121 14:37:31.075254  235599 start.go:256] writing updated cluster config ...
	I1121 14:37:31.075536  235599 ssh_runner.go:195] Run: rm -f paused
	I1121 14:37:31.078967  235599 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:37:31.079592  235599 kapi.go:59] client config for pause-738756: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/client.crt", KeyFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/client.key", CAFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1121 14:37:31.081772  235599 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tc86b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.085112  235599 pod_ready.go:94] pod "coredns-66bc5c9577-tc86b" is "Ready"
	I1121 14:37:31.085129  235599 pod_ready.go:86] duration metric: took 3.340869ms for pod "coredns-66bc5c9577-tc86b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.086620  235599 pod_ready.go:83] waiting for pod "etcd-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.089972  235599 pod_ready.go:94] pod "etcd-pause-738756" is "Ready"
	I1121 14:37:31.089989  235599 pod_ready.go:86] duration metric: took 3.351817ms for pod "etcd-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.091695  235599 pod_ready.go:83] waiting for pod "kube-apiserver-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.094803  235599 pod_ready.go:94] pod "kube-apiserver-pause-738756" is "Ready"
	I1121 14:37:31.094823  235599 pod_ready.go:86] duration metric: took 3.111829ms for pod "kube-apiserver-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.096281  235599 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.483026  235599 pod_ready.go:94] pod "kube-controller-manager-pause-738756" is "Ready"
	I1121 14:37:31.483055  235599 pod_ready.go:86] duration metric: took 386.756153ms for pod "kube-controller-manager-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.683291  235599 pod_ready.go:83] waiting for pod "kube-proxy-4l9nn" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.797209  230782 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-794941"
	I1121 14:37:31.797255  230782 host.go:66] Checking if "old-k8s-version-794941" exists ...
	I1121 14:37:31.797650  230782 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:37:31.797672  230782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:37:31.797722  230782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:37:31.797757  230782 cli_runner.go:164] Run: docker container inspect old-k8s-version-794941 --format={{.State.Status}}
	I1121 14:37:31.824362  230782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/old-k8s-version-794941/id_rsa Username:docker}
	I1121 14:37:31.827144  230782 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:37:31.827168  230782 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:37:31.827241  230782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:37:31.847003  230782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/old-k8s-version-794941/id_rsa Username:docker}
	I1121 14:37:31.859803  230782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:37:31.917999  230782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:37:31.936937  230782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:37:31.956427  230782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:37:32.073710  230782 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1121 14:37:32.075124  230782 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-794941" to be "Ready" ...
	I1121 14:37:32.300737  230782 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:37:32.084590  235599 pod_ready.go:94] pod "kube-proxy-4l9nn" is "Ready"
	I1121 14:37:32.084615  235599 pod_ready.go:86] duration metric: took 401.298514ms for pod "kube-proxy-4l9nn" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:32.283179  235599 pod_ready.go:83] waiting for pod "kube-scheduler-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:32.683716  235599 pod_ready.go:94] pod "kube-scheduler-pause-738756" is "Ready"
	I1121 14:37:32.683742  235599 pod_ready.go:86] duration metric: took 400.536666ms for pod "kube-scheduler-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:32.683756  235599 pod_ready.go:40] duration metric: took 1.60476816s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:37:32.727107  235599 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:37:32.728780  235599 out.go:179] * Done! kubectl is now configured to use "pause-738756" cluster and "default" namespace by default
	I1121 14:37:28.957192  202147 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
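
The pod_ready.go lines above poll each kube-system pod for the PodReady condition ("Ready" or gone). A condensed client-go sketch of that check, assuming kubeconfig-based auth rather than the rest.Config minikube builds directly in kapi.go:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod carries a PodReady=True condition.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s ready=%v\n", p.Name, isPodReady(&p))
        }
    }
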
	
	
	==> CRI-O <==
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.777516551Z" level=info msg="RDT not available in the host system"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.777524826Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.778264996Z" level=info msg="Conmon does support the --sync option"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.778282408Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.778293949Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.778959942Z" level=info msg="Conmon does support the --sync option"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.778973291Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.782778676Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.782802788Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.783440102Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.783933501Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.783995459Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.855870213Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-tc86b Namespace:kube-system ID:5fe88083690acd038be4fe16e709df436c8218e4f6d9cb2dfb0ebbb59ec1847b UID:07e1e5d6-3bb7-4e12-8bba-fb901718b11e NetNS:/var/run/netns/0564deae-79ad-48af-9c01-3299a3b4fd8b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009040c0}] Aliases:map[]}"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856026127Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-tc86b for CNI network kindnet (type=ptp)"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856409673Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856430453Z" level=info msg="Starting seccomp notifier watcher"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856504371Z" level=info msg="Create NRI interface"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856584529Z" level=info msg="built-in NRI default validator is disabled"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856598591Z" level=info msg="runtime interface created"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856611886Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856619607Z" level=info msg="runtime interface starting up..."
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856626766Z" level=info msg="starting plugins..."
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856637043Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856861369Z" level=info msg="No systemd watchdog enabled"
	Nov 21 14:37:29 pause-738756 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
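
The CRI-O entries above are the crio systemd unit's journal as collected inside the node container. A small Go sketch of an equivalent collection step (not minikube's exact code) simply shells out to journalctl:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Read the crio unit's journal: newest 50 lines, no pager.
        out, err := exec.Command("journalctl", "-u", "crio", "-n", "50", "--no-pager").CombinedOutput()
        if err != nil {
            fmt.Println("journalctl failed:", err)
        }
        fmt.Print(string(out))
    }
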
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ee3fce318e0a4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   0                   5fe88083690ac       coredns-66bc5c9577-tc86b               kube-system
	56d9e1a182ba9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   22 seconds ago      Running             kindnet-cni               0                   b31c1ecd2e654       kindnet-xjsdb                          kube-system
	bcfe29abe0722       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   22 seconds ago      Running             kube-proxy                0                   2671f8638d54e       kube-proxy-4l9nn                       kube-system
	7e3b49e98729b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   32 seconds ago      Running             kube-controller-manager   0                   12b6fc8defbaf       kube-controller-manager-pause-738756   kube-system
	8438e531acf65       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   32 seconds ago      Running             kube-apiserver            0                   ccf5b0a02a08e       kube-apiserver-pause-738756            kube-system
	7af2de2ff1798       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   32 seconds ago      Running             etcd                      0                   7165a01bd528d       etcd-pause-738756                      kube-system
	8aa934ad27499       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   32 seconds ago      Running             kube-scheduler            0                   85ab4117a0c9f       kube-scheduler-pause-738756            kube-system
	
	
	==> coredns [ee3fce318e0a45135660390d314eef5598d0f9b44a19c85b18df5bc45372d2b0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54418 - 28068 "HINFO IN 3238780774414035670.8389556130119888341. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.492059225s
	
	
	==> describe nodes <==
	Name:               pause-738756
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-738756
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=pause-738756
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_37_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:37:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-738756
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:37:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:37:27 +0000   Fri, 21 Nov 2025 14:37:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:37:27 +0000   Fri, 21 Nov 2025 14:37:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:37:27 +0000   Fri, 21 Nov 2025 14:37:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:37:27 +0000   Fri, 21 Nov 2025 14:37:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-738756
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                721de5fa-e884-4af4-93de-c2f2a3559246
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-tc86b                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-pause-738756                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-xjsdb                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-pause-738756             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-pause-738756    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-4l9nn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-pause-738756             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node pause-738756 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node pause-738756 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node pause-738756 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node pause-738756 event: Registered Node pause-738756 in Controller
	  Normal  NodeReady                12s   kubelet          Node pause-738756 status is now: NodeReady
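
The "Allocated resources" summary above is the per-pod requests and limits summed as resource quantities. A sketch reproducing the 850m (10%) CPU figure with k8s.io/apimachinery's resource package (request values copied from the pod table):

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // CPU requests from the pod table: coredns 100m, etcd 100m,
        // kindnet 100m, apiserver 250m, controller-manager 200m, scheduler 100m.
        requests := []string{"100m", "100m", "100m", "250m", "200m", "100m"}
        total := resource.MustParse("0")
        for _, r := range requests {
            q := resource.MustParse(r)
            total.Add(q)
        }
        capacity := resource.MustParse("8") // node cpu capacity from above
        fmt.Printf("requested %s of %s CPU (%d%%)\n",
            total.String(), capacity.String(),
            total.MilliValue()*100/capacity.MilliValue())
    }
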
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [7af2de2ff1798680465409699de1fb7a564f52aa9c88d8fbedadd9879d1981ba] <==
	{"level":"warn","ts":"2025-11-21T14:37:03.542094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.550063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.557365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.566064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.572866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.580063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.588159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.596831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.612679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.619786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.626622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.633586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.647970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.655642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.662806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.669375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.676911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.683508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.690447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.698375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.704679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.711435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.728366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.741736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.794516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59014","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:37:35 up  1:20,  0 user,  load average: 3.23, 2.48, 1.58
	Linux pause-738756 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [56d9e1a182ba989709141440de730d7378abf4f126e0dd2e04cc6b0b9983fcc3] <==
	I1121 14:37:13.161364       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:37:13.161640       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:37:13.161777       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:37:13.161792       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:37:13.161813       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:37:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:37:13.363883       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:37:13.363986       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:37:13.364006       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:37:13.365221       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:37:13.690240       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:37:13.690268       1 metrics.go:72] Registering metrics
	I1121 14:37:13.690348       1 controller.go:711] "Syncing nftables rules"
	I1121 14:37:23.366618       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:37:23.366678       1 main.go:301] handling current node
	I1121 14:37:33.363789       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:37:33.363839       1 main.go:301] handling current node
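
kindnet re-handles the node list on a fixed interval, which is why the "Handling node" pairs above land at 14:37:13, :23 and :33. A generic sketch of such a periodic reconcile loop (the reconcile body is a stand-in, not kindnet's actual sync logic):

    package main

    import (
        "fmt"
        "time"
    )

    func reconcile() {
        fmt.Println("handling current node") // stand-in for per-node sync
    }

    func main() {
        ticker := time.NewTicker(10 * time.Second) // cadence seen in the log
        defer ticker.Stop()
        for range ticker.C {
            reconcile()
        }
    }
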
	
	
	==> kube-apiserver [8438e531acf65cde01ed3f51f011623b945419737cc2076672ef80edc1bade42] <==
	I1121 14:37:04.285681       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1121 14:37:04.285889       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 14:37:04.286549       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:37:04.290204       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:37:04.290457       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:37:04.296629       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:37:04.296900       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:37:04.466932       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:37:05.188019       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:37:05.191627       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:37:05.191644       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:37:05.603629       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:37:05.635660       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:37:05.690439       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:37:05.695433       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1121 14:37:05.696193       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:37:05.699629       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:37:06.215033       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:37:06.898956       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:37:06.908380       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:37:06.914821       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:37:11.371849       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:37:11.869903       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:37:11.874553       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:37:12.017095       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7e3b49e98729bb65e64035ae4d9e9d363321c322f22b5020b203d05e572e425c] <==
	I1121 14:37:11.191035       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-738756" podCIDRs=["10.244.0.0/24"]
	I1121 14:37:11.213609       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 14:37:11.214748       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 14:37:11.214768       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:37:11.214793       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:37:11.215046       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:37:11.215524       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:37:11.215541       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:37:11.215603       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:37:11.215604       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:37:11.215640       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:37:11.215640       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:37:11.215658       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:37:11.215661       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 14:37:11.215628       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:37:11.217047       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:37:11.217076       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 14:37:11.217114       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:37:11.217223       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 14:37:11.218681       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:37:11.227932       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:37:11.233135       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 14:37:11.238319       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 14:37:11.240628       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:37:26.184918       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bcfe29abe072243bf8e5d03b9436f40d0ef8010d13e70df048b62eaab674dfba] <==
	I1121 14:37:13.020699       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:37:13.089058       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:37:13.189610       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:37:13.189640       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:37:13.189713       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:37:13.206677       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:37:13.206715       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:37:13.211524       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:37:13.211956       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:37:13.211989       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:37:13.213219       1 config.go:200] "Starting service config controller"
	I1121 14:37:13.213256       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:37:13.213357       1 config.go:309] "Starting node config controller"
	I1121 14:37:13.213398       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:37:13.213467       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:37:13.213487       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:37:13.213598       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:37:13.213610       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:37:13.313448       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:37:13.313464       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:37:13.314208       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:37:13.314246       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
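
The "Waiting for caches to sync" / "Caches are synced" pairs in the kube-proxy log are the standard client-go shared-informer startup handshake: start the informers, then block until every cache has completed its initial list. A minimal sketch of that pattern (kubeconfig auth assumed; kube-proxy wires this up internally):

    package main

    import (
        "fmt"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        factory := informers.NewSharedInformerFactory(cs, 0)
        svcInformer := factory.Core().V1().Services().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)

        // Mirrors the "Waiting for caches to sync" -> "Caches are synced" pair.
        if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
            panic("caches did not sync")
        }
        fmt.Println("caches are synced")
    }
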
	
	
	==> kube-scheduler [8aa934ad274990a7ef46badfbc85e87cceabe99a35dd4597de2efe2efb335cbb] <==
	E1121 14:37:04.232180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:37:04.232218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:37:04.232291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:37:04.232284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:37:04.232319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:37:04.232384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:37:04.232453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:37:04.232399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:37:04.232406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:37:04.232422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:37:04.232548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:37:04.232590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:37:04.232597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:37:04.232636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:37:04.232717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:37:04.232743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:37:05.035786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:37:05.121803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:37:05.142984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:37:05.159926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:37:05.170010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:37:05.191050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:37:05.328859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:37:05.463406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1121 14:37:05.729632       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
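
The burst of "Failed to watch ... forbidden" errors above is routine scheduler startup noise: its watches begin before the RBAC bindings for system:kube-scheduler have propagated, and they stop once the final "Caches are synced" line appears. The same authorization question can be posed explicitly with a SelfSubjectAccessReview; a sketch (kubeconfig auth assumed, so it checks the current identity rather than the scheduler's):

    package main

    import (
        "context"
        "fmt"

        authorizationv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Ask the apiserver: may the current identity list pods cluster-wide?
        ssar := &authorizationv1.SelfSubjectAccessReview{
            Spec: authorizationv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authorizationv1.ResourceAttributes{
                    Verb:     "list",
                    Resource: "pods",
                },
            },
        }
        resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), ssar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("allowed:", resp.Status.Allowed, resp.Status.Reason)
    }
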
	
	
	==> kubelet <==
	Nov 21 14:37:11 pause-738756 kubelet[1308]: I1121 14:37:11.245049    1308 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048358    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm97l\" (UniqueName: \"kubernetes.io/projected/648edea0-a50a-4381-99de-f96747b514f1-kube-api-access-vm97l\") pod \"kube-proxy-4l9nn\" (UID: \"648edea0-a50a-4381-99de-f96747b514f1\") " pod="kube-system/kube-proxy-4l9nn"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048415    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xxbp\" (UniqueName: \"kubernetes.io/projected/2b5a6c88-1f8c-4d52-8efc-72da2ae7668c-kube-api-access-2xxbp\") pod \"kindnet-xjsdb\" (UID: \"2b5a6c88-1f8c-4d52-8efc-72da2ae7668c\") " pod="kube-system/kindnet-xjsdb"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048443    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/648edea0-a50a-4381-99de-f96747b514f1-xtables-lock\") pod \"kube-proxy-4l9nn\" (UID: \"648edea0-a50a-4381-99de-f96747b514f1\") " pod="kube-system/kube-proxy-4l9nn"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048522    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/648edea0-a50a-4381-99de-f96747b514f1-kube-proxy\") pod \"kube-proxy-4l9nn\" (UID: \"648edea0-a50a-4381-99de-f96747b514f1\") " pod="kube-system/kube-proxy-4l9nn"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048598    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/648edea0-a50a-4381-99de-f96747b514f1-lib-modules\") pod \"kube-proxy-4l9nn\" (UID: \"648edea0-a50a-4381-99de-f96747b514f1\") " pod="kube-system/kube-proxy-4l9nn"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048623    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2b5a6c88-1f8c-4d52-8efc-72da2ae7668c-cni-cfg\") pod \"kindnet-xjsdb\" (UID: \"2b5a6c88-1f8c-4d52-8efc-72da2ae7668c\") " pod="kube-system/kindnet-xjsdb"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048645    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b5a6c88-1f8c-4d52-8efc-72da2ae7668c-xtables-lock\") pod \"kindnet-xjsdb\" (UID: \"2b5a6c88-1f8c-4d52-8efc-72da2ae7668c\") " pod="kube-system/kindnet-xjsdb"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048666    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b5a6c88-1f8c-4d52-8efc-72da2ae7668c-lib-modules\") pod \"kindnet-xjsdb\" (UID: \"2b5a6c88-1f8c-4d52-8efc-72da2ae7668c\") " pod="kube-system/kindnet-xjsdb"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: E1121 14:37:12.154541    1308 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 21 14:37:12 pause-738756 kubelet[1308]: E1121 14:37:12.154596    1308 projected.go:196] Error preparing data for projected volume kube-api-access-vm97l for pod kube-system/kube-proxy-4l9nn: configmap "kube-root-ca.crt" not found
	Nov 21 14:37:12 pause-738756 kubelet[1308]: E1121 14:37:12.154718    1308 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 21 14:37:12 pause-738756 kubelet[1308]: E1121 14:37:12.154739    1308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/648edea0-a50a-4381-99de-f96747b514f1-kube-api-access-vm97l podName:648edea0-a50a-4381-99de-f96747b514f1 nodeName:}" failed. No retries permitted until 2025-11-21 14:37:12.654715808 +0000 UTC m=+6.008200521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vm97l" (UniqueName: "kubernetes.io/projected/648edea0-a50a-4381-99de-f96747b514f1-kube-api-access-vm97l") pod "kube-proxy-4l9nn" (UID: "648edea0-a50a-4381-99de-f96747b514f1") : configmap "kube-root-ca.crt" not found
	Nov 21 14:37:12 pause-738756 kubelet[1308]: E1121 14:37:12.154748    1308 projected.go:196] Error preparing data for projected volume kube-api-access-2xxbp for pod kube-system/kindnet-xjsdb: configmap "kube-root-ca.crt" not found
	Nov 21 14:37:12 pause-738756 kubelet[1308]: E1121 14:37:12.154872    1308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b5a6c88-1f8c-4d52-8efc-72da2ae7668c-kube-api-access-2xxbp podName:2b5a6c88-1f8c-4d52-8efc-72da2ae7668c nodeName:}" failed. No retries permitted until 2025-11-21 14:37:12.654842212 +0000 UTC m=+6.008326907 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2xxbp" (UniqueName: "kubernetes.io/projected/2b5a6c88-1f8c-4d52-8efc-72da2ae7668c-kube-api-access-2xxbp") pod "kindnet-xjsdb" (UID: "2b5a6c88-1f8c-4d52-8efc-72da2ae7668c") : configmap "kube-root-ca.crt" not found
	Nov 21 14:37:13 pause-738756 kubelet[1308]: I1121 14:37:13.781768    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4l9nn" podStartSLOduration=1.7817451979999999 podStartE2EDuration="1.781745198s" podCreationTimestamp="2025-11-21 14:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:37:13.781372485 +0000 UTC m=+7.134857217" watchObservedRunningTime="2025-11-21 14:37:13.781745198 +0000 UTC m=+7.135229912"
	Nov 21 14:37:13 pause-738756 kubelet[1308]: I1121 14:37:13.782351    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-xjsdb" podStartSLOduration=1.7823341639999999 podStartE2EDuration="1.782334164s" podCreationTimestamp="2025-11-21 14:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:37:13.771720171 +0000 UTC m=+7.125204885" watchObservedRunningTime="2025-11-21 14:37:13.782334164 +0000 UTC m=+7.135818877"
	Nov 21 14:37:23 pause-738756 kubelet[1308]: I1121 14:37:23.895191    1308 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:37:23 pause-738756 kubelet[1308]: I1121 14:37:23.929056    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07e1e5d6-3bb7-4e12-8bba-fb901718b11e-config-volume\") pod \"coredns-66bc5c9577-tc86b\" (UID: \"07e1e5d6-3bb7-4e12-8bba-fb901718b11e\") " pod="kube-system/coredns-66bc5c9577-tc86b"
	Nov 21 14:37:23 pause-738756 kubelet[1308]: I1121 14:37:23.929098    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptghh\" (UniqueName: \"kubernetes.io/projected/07e1e5d6-3bb7-4e12-8bba-fb901718b11e-kube-api-access-ptghh\") pod \"coredns-66bc5c9577-tc86b\" (UID: \"07e1e5d6-3bb7-4e12-8bba-fb901718b11e\") " pod="kube-system/coredns-66bc5c9577-tc86b"
	Nov 21 14:37:24 pause-738756 kubelet[1308]: I1121 14:37:24.796454    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tc86b" podStartSLOduration=12.796433599 podStartE2EDuration="12.796433599s" podCreationTimestamp="2025-11-21 14:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:37:24.796227603 +0000 UTC m=+18.149712339" watchObservedRunningTime="2025-11-21 14:37:24.796433599 +0000 UTC m=+18.149918312"
	Nov 21 14:37:33 pause-738756 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:37:33 pause-738756 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:37:33 pause-738756 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 21 14:37:33 pause-738756 systemd[1]: kubelet.service: Consumed 1.097s CPU time.
	

                                                
                                                
-- /stdout --
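Note: the MountVolume.SetUp failures in the kubelet log above are the usual startup race: kube-controller-manager had not yet published the "kube-root-ca.crt" configmap into kube-system when the kubelet first built the projected service-account volumes, so it backed off 500ms and succeeded on retry (the kube-proxy and kindnet pods report ~1.8s startup durations). A manual spot-check of that configmap, assuming kubectl access to the profile's context, would be:

	kubectl --context pause-738756 -n kube-system get configmap kube-root-ca.crt
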
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-738756 -n pause-738756
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-738756 -n pause-738756: exit status 2 (310.205239ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
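Note: status --format renders minikube's status struct through a Go template, which is why a single field ("Running") prints even though the overall exit code is 2. A sketch that prints several fields in one call (Host and APIServer are field names the harness itself uses; Kubelet is an assumed field name taken from the default status output):

	out/minikube-linux-amd64 status -p pause-738756 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
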
helpers_test.go:269: (dbg) Run:  kubectl --context pause-738756 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
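Note: the field selector above restricts the listing to pods whose phase is not Running. A hypothetical widened form that prints every pod with its phase, for comparison:

	kubectl --context pause-738756 get pods -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase'
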
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-738756
helpers_test.go:243: (dbg) docker inspect pause-738756:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1b205bee155e65d38e3b643e92f606dbd8df0de11d9b54efe3560a4f5cdc871e",
	        "Created": "2025-11-21T14:36:48.036828535Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 228030,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:36:48.080046475Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/1b205bee155e65d38e3b643e92f606dbd8df0de11d9b54efe3560a4f5cdc871e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1b205bee155e65d38e3b643e92f606dbd8df0de11d9b54efe3560a4f5cdc871e/hostname",
	        "HostsPath": "/var/lib/docker/containers/1b205bee155e65d38e3b643e92f606dbd8df0de11d9b54efe3560a4f5cdc871e/hosts",
	        "LogPath": "/var/lib/docker/containers/1b205bee155e65d38e3b643e92f606dbd8df0de11d9b54efe3560a4f5cdc871e/1b205bee155e65d38e3b643e92f606dbd8df0de11d9b54efe3560a4f5cdc871e-json.log",
	        "Name": "/pause-738756",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-738756:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-738756",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1b205bee155e65d38e3b643e92f606dbd8df0de11d9b54efe3560a4f5cdc871e",
	                "LowerDir": "/var/lib/docker/overlay2/d6f724eadcbfc07a29479434f01517070c10238bc8a00c09db6548360d21e8b0-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d6f724eadcbfc07a29479434f01517070c10238bc8a00c09db6548360d21e8b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d6f724eadcbfc07a29479434f01517070c10238bc8a00c09db6548360d21e8b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d6f724eadcbfc07a29479434f01517070c10238bc8a00c09db6548360d21e8b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-738756",
	                "Source": "/var/lib/docker/volumes/pause-738756/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-738756",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-738756",
	                "name.minikube.sigs.k8s.io": "pause-738756",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5238646e7d40e765eec41c8dcfaf08d80bd2c76fbbde6602795ca4115bfd3540",
	            "SandboxKey": "/var/run/docker/netns/5238646e7d40",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-738756": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2186bb521b00e5c019e854abbbfb8801abf7ffcb6af2eee49c4b81e656682603",
	                    "EndpointID": "2acd6deb0c31df22569fc4c9cceaa6fbaf726b0faef3da3204fa8f9c75cfe9d6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "52:ea:7b:a9:8e:28",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-738756",
	                        "1b205bee155e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
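Note: the harness extracts single fields from this JSON with Go templates (see the docker container inspect -f calls in the minikube log below). Manual sketches of the same mechanism, not harness output:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-738756
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-738756
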
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-738756 -n pause-738756
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-738756 -n pause-738756: exit status 2 (300.5214ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-738756 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m03 sudo cat /home/docker/cp-test_multinode-384928-m02_multinode-384928-m03.txt                                                                                                                      │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ cp      │ multinode-384928 cp testdata/cp-test.txt multinode-384928-m03:/home/docker/cp-test.txt                                                                                                                                                        │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m03 sudo cat /home/docker/cp-test.txt                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ cp      │ multinode-384928 cp multinode-384928-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1537588437/001/cp-test_multinode-384928-m03.txt                                                                                             │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m03 sudo cat /home/docker/cp-test.txt                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ cp      │ multinode-384928 cp multinode-384928-m03:/home/docker/cp-test.txt multinode-384928:/home/docker/cp-test_multinode-384928-m03_multinode-384928.txt                                                                                             │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m03 sudo cat /home/docker/cp-test.txt                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928 sudo cat /home/docker/cp-test_multinode-384928-m03_multinode-384928.txt                                                                                                                              │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ cp      │ multinode-384928 cp multinode-384928-m03:/home/docker/cp-test.txt multinode-384928-m02:/home/docker/cp-test_multinode-384928-m03_multinode-384928-m02.txt                                                                                     │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m03 sudo cat /home/docker/cp-test.txt                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m02 sudo cat /home/docker/cp-test_multinode-384928-m03_multinode-384928-m02.txt                                                                                                                      │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ node    │ multinode-384928 node stop m03                                                                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ node    │ multinode-384928 node start m03 -v=5 --alsologtostderr                                                                                                                                                                                        │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ node    │ list -p multinode-384928                                                                                                                                                                                                                      │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ stop    │ -p multinode-384928                                                                                                                                                                                                                           │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p cert-expiration-046125 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-046125   │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p cert-options-116734 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ delete  │ -p force-systemd-env-653926                                                                                                                                                                                                                   │ force-systemd-env-653926 │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p pause-738756 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                                     │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:37 UTC │
	│ ssh     │ cert-options-116734 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ ssh     │ -p cert-options-116734 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ delete  │ -p cert-options-116734                                                                                                                                                                                                                        │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │                     │
	│ start   │ -p pause-738756 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ pause   │ -p pause-738756 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:37:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:37:26.823965  235599 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:37:26.824245  235599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:37:26.824258  235599 out.go:374] Setting ErrFile to fd 2...
	I1121 14:37:26.824264  235599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:37:26.824437  235599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:37:26.824828  235599 out.go:368] Setting JSON to false
	I1121 14:37:26.826000  235599 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4796,"bootTime":1763731051,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:37:26.826089  235599 start.go:143] virtualization: kvm guest
	I1121 14:37:26.828042  235599 out.go:179] * [pause-738756] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:37:26.829391  235599 notify.go:221] Checking for updates...
	I1121 14:37:26.829411  235599 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:37:26.830722  235599 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:37:26.831877  235599 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:37:26.833035  235599 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:37:26.833995  235599 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:37:26.834972  235599 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:37:26.836470  235599 config.go:182] Loaded profile config "pause-738756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:37:26.836964  235599 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:37:26.860069  235599 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:37:26.860143  235599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:37:26.913176  235599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-21 14:37:26.903862876 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:37:26.913268  235599 docker.go:319] overlay module found
	I1121 14:37:26.915825  235599 out.go:179] * Using the docker driver based on existing profile
	I1121 14:37:26.916781  235599 start.go:309] selected driver: docker
	I1121 14:37:26.916794  235599 start.go:930] validating driver "docker" against &{Name:pause-738756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-738756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:37:26.916951  235599 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:37:26.917025  235599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:37:26.972774  235599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-21 14:37:26.963678484 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:37:26.973535  235599 cni.go:84] Creating CNI manager for ""
	I1121 14:37:26.973619  235599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:37:26.973678  235599 start.go:353] cluster config:
	{Name:pause-738756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-738756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:37:26.975442  235599 out.go:179] * Starting "pause-738756" primary control-plane node in "pause-738756" cluster
	I1121 14:37:26.976438  235599 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:37:26.977397  235599 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:37:26.978332  235599 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:37:26.978367  235599 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 14:37:26.978375  235599 cache.go:65] Caching tarball of preloaded images
	I1121 14:37:26.978434  235599 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:37:26.978452  235599 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 14:37:26.978460  235599 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:37:26.978600  235599 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/config.json ...
	I1121 14:37:26.997252  235599 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:37:26.997271  235599 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:37:26.997285  235599 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:37:26.997308  235599 start.go:360] acquireMachinesLock for pause-738756: {Name:mk113b967a7ccc0234ad1b5ee68c8f3782010153 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:37:26.997355  235599 start.go:364] duration metric: took 30.634µs to acquireMachinesLock for "pause-738756"
	I1121 14:37:26.997385  235599 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:37:26.997392  235599 fix.go:54] fixHost starting: 
	I1121 14:37:26.997627  235599 cli_runner.go:164] Run: docker container inspect pause-738756 --format={{.State.Status}}
	I1121 14:37:27.014906  235599 fix.go:112] recreateIfNeeded on pause-738756: state=Running err=<nil>
	W1121 14:37:27.014927  235599 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 14:37:22.203391  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:22.703206  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:23.203686  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:23.703501  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:24.203355  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:24.703180  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:25.203505  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:25.703983  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:26.203646  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:26.703750  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:23.231052  202147 logs.go:123] Gathering logs for kube-controller-manager [830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291] ...
	I1121 14:37:23.231077  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291"
	I1121 14:37:23.258983  202147 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:37:23.259008  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:37:23.305077  202147 logs.go:123] Gathering logs for container status ...
	I1121 14:37:23.305103  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:37:23.333214  202147 logs.go:123] Gathering logs for kubelet ...
	I1121 14:37:23.333234  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:37:23.410624  202147 logs.go:123] Gathering logs for dmesg ...
	I1121 14:37:23.410649  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:37:25.924791  202147 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:37:25.925178  202147 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:37:25.925225  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:37:25.925270  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:37:25.951655  202147 cri.go:89] found id: "5a48743f053dd94ea1c19f18ed0d8b7f694b814d093ea0495600ae2b8d09e855"
	I1121 14:37:25.951675  202147 cri.go:89] found id: ""
	I1121 14:37:25.951683  202147 logs.go:282] 1 containers: [5a48743f053dd94ea1c19f18ed0d8b7f694b814d093ea0495600ae2b8d09e855]
	I1121 14:37:25.951726  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:37:25.955761  202147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:37:25.955820  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:37:25.981218  202147 cri.go:89] found id: ""
	I1121 14:37:25.981235  202147 logs.go:282] 0 containers: []
	W1121 14:37:25.981241  202147 logs.go:284] No container was found matching "etcd"
	I1121 14:37:25.981246  202147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:37:25.981292  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:37:26.006464  202147 cri.go:89] found id: ""
	I1121 14:37:26.006494  202147 logs.go:282] 0 containers: []
	W1121 14:37:26.006504  202147 logs.go:284] No container was found matching "coredns"
	I1121 14:37:26.006510  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:37:26.006548  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:37:26.031428  202147 cri.go:89] found id: "3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:37:26.031447  202147 cri.go:89] found id: ""
	I1121 14:37:26.031458  202147 logs.go:282] 1 containers: [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980]
	I1121 14:37:26.031509  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:37:26.035021  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:37:26.035069  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:37:26.059852  202147 cri.go:89] found id: ""
	I1121 14:37:26.059873  202147 logs.go:282] 0 containers: []
	W1121 14:37:26.059881  202147 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:37:26.059889  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:37:26.059938  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:37:26.084839  202147 cri.go:89] found id: "830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291"
	I1121 14:37:26.084856  202147 cri.go:89] found id: ""
	I1121 14:37:26.084863  202147 logs.go:282] 1 containers: [830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291]
	I1121 14:37:26.084903  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:37:26.088434  202147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:37:26.088475  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:37:26.113030  202147 cri.go:89] found id: ""
	I1121 14:37:26.113047  202147 logs.go:282] 0 containers: []
	W1121 14:37:26.113055  202147 logs.go:284] No container was found matching "kindnet"
	I1121 14:37:26.113062  202147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:37:26.113103  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:37:26.138004  202147 cri.go:89] found id: ""
	I1121 14:37:26.138024  202147 logs.go:282] 0 containers: []
	W1121 14:37:26.138033  202147 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:37:26.138043  202147 logs.go:123] Gathering logs for kubelet ...
	I1121 14:37:26.138055  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:37:26.214421  202147 logs.go:123] Gathering logs for dmesg ...
	I1121 14:37:26.214446  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:37:26.228700  202147 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:37:26.228730  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:37:26.287004  202147 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:37:26.287024  202147 logs.go:123] Gathering logs for kube-apiserver [5a48743f053dd94ea1c19f18ed0d8b7f694b814d093ea0495600ae2b8d09e855] ...
	I1121 14:37:26.287035  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a48743f053dd94ea1c19f18ed0d8b7f694b814d093ea0495600ae2b8d09e855"
	I1121 14:37:26.317758  202147 logs.go:123] Gathering logs for kube-scheduler [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980] ...
	I1121 14:37:26.317781  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:37:26.361940  202147 logs.go:123] Gathering logs for kube-controller-manager [830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291] ...
	I1121 14:37:26.361965  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291"
	I1121 14:37:26.386628  202147 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:37:26.386651  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:37:26.429216  202147 logs.go:123] Gathering logs for container status ...
	I1121 14:37:26.429242  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:37:27.016730  235599 out.go:252] * Updating the running docker "pause-738756" container ...
	I1121 14:37:27.016759  235599 machine.go:94] provisionDockerMachine start ...
	I1121 14:37:27.016833  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:27.033998  235599 main.go:143] libmachine: Using SSH client type: native
	I1121 14:37:27.034236  235599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1121 14:37:27.034249  235599 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:37:27.163036  235599 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-738756
	
	I1121 14:37:27.163059  235599 ubuntu.go:182] provisioning hostname "pause-738756"
	I1121 14:37:27.163105  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:27.180930  235599 main.go:143] libmachine: Using SSH client type: native
	I1121 14:37:27.181148  235599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1121 14:37:27.181165  235599 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-738756 && echo "pause-738756" | sudo tee /etc/hostname
	I1121 14:37:27.320455  235599 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-738756
	
	I1121 14:37:27.320531  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:27.337616  235599 main.go:143] libmachine: Using SSH client type: native
	I1121 14:37:27.337835  235599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1121 14:37:27.337862  235599 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-738756' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-738756/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-738756' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:37:27.466533  235599 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:37:27.466554  235599 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11045/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11045/.minikube}
	I1121 14:37:27.466590  235599 ubuntu.go:190] setting up certificates
	I1121 14:37:27.466603  235599 provision.go:84] configureAuth start
	I1121 14:37:27.466655  235599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-738756
	I1121 14:37:27.484261  235599 provision.go:143] copyHostCerts
	I1121 14:37:27.484336  235599 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem, removing ...
	I1121 14:37:27.484352  235599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem
	I1121 14:37:27.484422  235599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem (1123 bytes)
	I1121 14:37:27.484527  235599 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem, removing ...
	I1121 14:37:27.484535  235599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem
	I1121 14:37:27.484589  235599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem (1679 bytes)
	I1121 14:37:27.484681  235599 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem, removing ...
	I1121 14:37:27.484689  235599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem
	I1121 14:37:27.484716  235599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem (1078 bytes)
	I1121 14:37:27.484797  235599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem org=jenkins.pause-738756 san=[127.0.0.1 192.168.85.2 localhost minikube pause-738756]
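provision.go is generating a server certificate whose SAN set is listed at the end of that line. The sketch below builds a certificate with the same SANs using only the standard library; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair named above:

// Sketch: a server cert carrying the logged SANs
// (127.0.0.1, 192.168.85.2, localhost, minikube, pause-738756).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-738756"}}, // org= in the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "pause-738756"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}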
	I1121 14:37:27.922323  235599 provision.go:177] copyRemoteCerts
	I1121 14:37:27.922371  235599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:37:27.922407  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:27.939749  235599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/pause-738756/id_rsa Username:docker}
	I1121 14:37:28.035165  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:37:28.052029  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1121 14:37:28.069673  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:37:28.085966  235599 provision.go:87] duration metric: took 619.351402ms to configureAuth
	I1121 14:37:28.085990  235599 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:37:28.086154  235599 config.go:182] Loaded profile config "pause-738756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:37:28.086249  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:28.103649  235599 main.go:143] libmachine: Using SSH client type: native
	I1121 14:37:28.103833  235599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1121 14:37:28.103850  235599 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:37:28.421299  235599 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:37:28.421319  235599 machine.go:97] duration metric: took 1.404554171s to provisionDockerMachine
	I1121 14:37:28.421329  235599 start.go:293] postStartSetup for "pause-738756" (driver="docker")
	I1121 14:37:28.421338  235599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:37:28.421406  235599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:37:28.421444  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:28.442262  235599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/pause-738756/id_rsa Username:docker}
	I1121 14:37:28.536082  235599 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:37:28.539507  235599 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:37:28.539540  235599 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:37:28.539550  235599 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/addons for local assets ...
	I1121 14:37:28.539625  235599 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/files for local assets ...
	I1121 14:37:28.539703  235599 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem -> 145422.pem in /etc/ssl/certs
	I1121 14:37:28.539784  235599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:37:28.547397  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:37:28.564179  235599 start.go:296] duration metric: took 142.837844ms for postStartSetup
	I1121 14:37:28.564245  235599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:37:28.564299  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:28.582495  235599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/pause-738756/id_rsa Username:docker}
	I1121 14:37:28.674349  235599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:37:28.679213  235599 fix.go:56] duration metric: took 1.681814373s for fixHost
	I1121 14:37:28.679236  235599 start.go:83] releasing machines lock for "pause-738756", held for 1.681869567s
	I1121 14:37:28.679300  235599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-738756
	I1121 14:37:28.696780  235599 ssh_runner.go:195] Run: cat /version.json
	I1121 14:37:28.696817  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:28.696874  235599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:37:28.696958  235599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-738756
	I1121 14:37:28.715386  235599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/pause-738756/id_rsa Username:docker}
	I1121 14:37:28.716113  235599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/pause-738756/id_rsa Username:docker}
	I1121 14:37:28.809134  235599 ssh_runner.go:195] Run: systemctl --version
	I1121 14:37:28.898271  235599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:37:28.933187  235599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:37:28.938604  235599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:37:28.938669  235599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:37:28.946479  235599 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 14:37:28.946499  235599 start.go:496] detecting cgroup driver to use...
	I1121 14:37:28.946532  235599 detect.go:190] detected "systemd" cgroup driver on host os
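One common way to make the "detected systemd cgroup driver" call above is to look for the cgroup v2 unified hierarchy, where systemd is the usual manager. This heuristic is an assumption for illustration, not necessarily what detect.go does:

// Sketch (assumed heuristic): cgroup v2 hosts expose a unified hierarchy
// with a top-level cgroup.controllers file; systemd is the typical manager there.
package main

import (
	"fmt"
	"os"
)

func main() {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 unified hierarchy: assume systemd driver")
	} else {
		fmt.Println("cgroup v1: driver depends on the init system / kubelet config")
	}
}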
	I1121 14:37:28.946592  235599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:37:28.961473  235599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:37:28.974109  235599 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:37:28.974154  235599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:37:28.988136  235599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:37:28.999500  235599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:37:29.111079  235599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:37:29.222433  235599 docker.go:234] disabling docker service ...
	I1121 14:37:29.222491  235599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:37:29.237157  235599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:37:29.249836  235599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:37:29.364799  235599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:37:29.473737  235599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:37:29.485821  235599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:37:29.499382  235599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:37:29.499453  235599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:29.508010  235599 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 14:37:29.508050  235599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:29.516325  235599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:29.524319  235599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:29.532273  235599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:37:29.539619  235599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:29.547812  235599 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:29.555426  235599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
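The string of sed invocations above rewrites CRI-O's drop-in config: pin the pause image, switch the cgroup manager to systemd, and allow unprivileged low ports via default_sysctls. A compact Go rendering of the two key edits (path and values taken from the log; the whole-line regexp rewrite is illustrative, not minikube's code):

// Sketch: set `pause_image` and `cgroup_manager` in CRI-O's drop-in config,
// replacing whole lines the way the logged sed commands do.
package main

import (
	"os"
	"regexp"
)

func setKey(data []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	data = setKey(data, "pause_image", "registry.k8s.io/pause:3.10.1")
	data = setKey(data, "cgroup_manager", "systemd")
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
	// The runtime must then be restarted to pick up the change, as the log does:
	// sudo systemctl daemon-reload && sudo systemctl restart crio
}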
	I1121 14:37:29.563506  235599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:37:29.570335  235599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:37:29.577626  235599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:37:29.682841  235599 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 14:37:29.859994  235599 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:37:29.860062  235599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:37:29.864223  235599 start.go:564] Will wait 60s for crictl version
	I1121 14:37:29.864266  235599 ssh_runner.go:195] Run: which crictl
	I1121 14:37:29.867667  235599 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:37:29.890231  235599 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:37:29.890284  235599 ssh_runner.go:195] Run: crio --version
	I1121 14:37:29.916621  235599 ssh_runner.go:195] Run: crio --version
	I1121 14:37:29.944543  235599 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:37:29.945534  235599 cli_runner.go:164] Run: docker network inspect pause-738756 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:37:29.964909  235599 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:37:29.968930  235599 kubeadm.go:884] updating cluster {Name:pause-738756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-738756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:37:29.969067  235599 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:37:29.969109  235599 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:37:30.001100  235599 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:37:30.001122  235599 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:37:30.001167  235599 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:37:30.025614  235599 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:37:30.025631  235599 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:37:30.025638  235599 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1121 14:37:30.025717  235599 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-738756 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-738756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:37:30.025777  235599 ssh_runner.go:195] Run: crio config
	I1121 14:37:30.069234  235599 cni.go:84] Creating CNI manager for ""
	I1121 14:37:30.069252  235599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:37:30.069267  235599 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:37:30.069287  235599 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-738756 NodeName:pause-738756 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:37:30.069395  235599 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-738756"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
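The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), written out to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small decoder loop can pull fields back out of such a stream; gopkg.in/yaml.v3 is an assumed dependency here:

// Sketch: walk the multi-document stream and read networking.podSubnet
// from the ClusterConfiguration document.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		if doc["kind"] == "ClusterConfiguration" {
			net := doc["networking"].(map[string]interface{})
			fmt.Println("podSubnet:", net["podSubnet"]) // 10.244.0.0/16 in this log
		}
	}
}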
	
	I1121 14:37:30.069444  235599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:37:30.077645  235599 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:37:30.077697  235599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:37:30.085132  235599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1121 14:37:30.097513  235599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:37:30.109507  235599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1121 14:37:30.121344  235599 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:37:30.124938  235599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:37:30.231736  235599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:37:30.244451  235599 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756 for IP: 192.168.85.2
	I1121 14:37:30.244467  235599 certs.go:195] generating shared ca certs ...
	I1121 14:37:30.244482  235599 certs.go:227] acquiring lock for ca certs: {Name:mkde3a7d6f17b238f06eab3a140993599f1b4367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:37:30.244639  235599 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key
	I1121 14:37:30.244679  235599 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key
	I1121 14:37:30.244689  235599 certs.go:257] generating profile certs ...
	I1121 14:37:30.244771  235599 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/client.key
	I1121 14:37:30.244825  235599 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/apiserver.key.60ca3139
	I1121 14:37:30.244863  235599 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/proxy-client.key
	I1121 14:37:30.244960  235599 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem (1338 bytes)
	W1121 14:37:30.244986  235599 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542_empty.pem, impossibly tiny 0 bytes
	I1121 14:37:30.244995  235599 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:37:30.245017  235599 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:37:30.245044  235599 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:37:30.245066  235599 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem (1679 bytes)
	I1121 14:37:30.245102  235599 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:37:30.245621  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:37:30.264457  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:37:30.282826  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:37:30.299838  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 14:37:30.316554  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 14:37:30.333890  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:37:30.350043  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:37:30.366133  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:37:30.382194  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:37:30.398036  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem --> /usr/share/ca-certificates/14542.pem (1338 bytes)
	I1121 14:37:30.415361  235599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /usr/share/ca-certificates/145422.pem (1708 bytes)
	I1121 14:37:30.431630  235599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:37:30.443274  235599 ssh_runner.go:195] Run: openssl version
	I1121 14:37:30.449049  235599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:37:30.456985  235599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:37:30.460353  235599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:37:30.460400  235599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:37:30.493678  235599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:37:30.501321  235599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14542.pem && ln -fs /usr/share/ca-certificates/14542.pem /etc/ssl/certs/14542.pem"
	I1121 14:37:30.509165  235599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14542.pem
	I1121 14:37:30.512555  235599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14542.pem
	I1121 14:37:30.512612  235599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	I1121 14:37:30.546740  235599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14542.pem /etc/ssl/certs/51391683.0"
	I1121 14:37:30.554166  235599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145422.pem && ln -fs /usr/share/ca-certificates/145422.pem /etc/ssl/certs/145422.pem"
	I1121 14:37:30.561812  235599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145422.pem
	I1121 14:37:30.565213  235599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145422.pem
	I1121 14:37:30.565253  235599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145422.pem
	I1121 14:37:30.599774  235599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145422.pem /etc/ssl/certs/3ec20f2e.0"
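The test -L / ln -fs pairs above build OpenSSL's hash-named symlink farm: each installed CA PEM in /etc/ssl/certs must be reachable through a <subject-hash>.0 link so OpenSSL can locate it by hash. A sketch of the same step, using the paths from the log and shelling out to openssl for the hash:

// Sketch: create the <subject-hash>.0 symlink for one CA PEM, mirroring
// `openssl x509 -hash -noout` plus `ln -fs` from the log.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace a stale link, as `ln -fs` does
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		panic(err)
	}
}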
	I1121 14:37:30.607616  235599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:37:30.611250  235599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:37:30.647947  235599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:37:30.683997  235599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:37:30.719784  235599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:37:30.756431  235599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:37:30.792639  235599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
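Each openssl x509 -checkend 86400 run above asks whether a certificate expires within one day, exiting non-zero if so. The same check in pure Go with crypto/x509, assuming a PEM-encoded certificate at the logged path:

// Sketch: report whether a PEM certificate's NotAfter falls within the window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}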
	I1121 14:37:30.828228  235599 kubeadm.go:401] StartCluster: {Name:pause-738756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-738756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:37:30.828337  235599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:37:30.828374  235599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:37:30.854349  235599 cri.go:89] found id: "ee3fce318e0a45135660390d314eef5598d0f9b44a19c85b18df5bc45372d2b0"
	I1121 14:37:30.854370  235599 cri.go:89] found id: "56d9e1a182ba989709141440de730d7378abf4f126e0dd2e04cc6b0b9983fcc3"
	I1121 14:37:30.854377  235599 cri.go:89] found id: "bcfe29abe072243bf8e5d03b9436f40d0ef8010d13e70df048b62eaab674dfba"
	I1121 14:37:30.854382  235599 cri.go:89] found id: "7e3b49e98729bb65e64035ae4d9e9d363321c322f22b5020b203d05e572e425c"
	I1121 14:37:30.854387  235599 cri.go:89] found id: "8438e531acf65cde01ed3f51f011623b945419737cc2076672ef80edc1bade42"
	I1121 14:37:30.854392  235599 cri.go:89] found id: "7af2de2ff1798680465409699de1fb7a564f52aa9c88d8fbedadd9879d1981ba"
	I1121 14:37:30.854396  235599 cri.go:89] found id: "8aa934ad274990a7ef46badfbc85e87cceabe99a35dd4597de2efe2efb335cbb"
	I1121 14:37:30.854403  235599 cri.go:89] found id: ""
	I1121 14:37:30.854436  235599 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 14:37:30.865698  235599 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:37:30Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:37:30.865759  235599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:37:30.873378  235599 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:37:30.873391  235599 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:37:30.873421  235599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:37:30.880096  235599 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:37:30.880794  235599 kubeconfig.go:125] found "pause-738756" server: "https://192.168.85.2:8443"
	I1121 14:37:30.881578  235599 kapi.go:59] client config for pause-738756: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/client.crt", KeyFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/client.key", CAFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1121 14:37:30.881947  235599 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1121 14:37:30.881965  235599 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1121 14:37:30.881969  235599 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1121 14:37:30.881974  235599 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1121 14:37:30.881978  235599 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1121 14:37:30.882264  235599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:37:30.889426  235599 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1121 14:37:30.889453  235599 kubeadm.go:602] duration metric: took 16.055847ms to restartPrimaryControlPlane
	I1121 14:37:30.889462  235599 kubeadm.go:403] duration metric: took 61.239917ms to StartCluster
	I1121 14:37:30.889477  235599 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:37:30.889542  235599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:37:30.890464  235599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:37:30.890733  235599 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:37:30.890829  235599 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:37:30.890983  235599 config.go:182] Loaded profile config "pause-738756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:37:30.893122  235599 out.go:179] * Verifying Kubernetes components...
	I1121 14:37:30.893127  235599 out.go:179] * Enabled addons: 
	I1121 14:37:27.203912  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:27.703288  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:28.204101  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:28.703754  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:29.203217  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:29.703744  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:30.203896  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:30.703526  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:31.203639  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:31.703990  230782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:37:31.771402  230782 kubeadm.go:1114] duration metric: took 11.652724666s to wait for elevateKubeSystemPrivileges
	I1121 14:37:31.771441  230782 kubeadm.go:403] duration metric: took 21.101889143s to StartCluster
	I1121 14:37:31.771461  230782 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:37:31.771573  230782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:37:31.772968  230782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:37:31.773204  230782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:37:31.773228  230782 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:37:31.773277  230782 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:37:31.773374  230782 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-794941"
	I1121 14:37:31.773394  230782 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-794941"
	I1121 14:37:31.773402  230782 config.go:182] Loaded profile config "old-k8s-version-794941": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1121 14:37:31.773418  230782 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-794941"
	I1121 14:37:31.773424  230782 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-794941"
	I1121 14:37:31.773456  230782 host.go:66] Checking if "old-k8s-version-794941" exists ...
	I1121 14:37:31.773835  230782 cli_runner.go:164] Run: docker container inspect old-k8s-version-794941 --format={{.State.Status}}
	I1121 14:37:31.773937  230782 cli_runner.go:164] Run: docker container inspect old-k8s-version-794941 --format={{.State.Status}}
	I1121 14:37:31.774522  230782 out.go:179] * Verifying Kubernetes components...
	I1121 14:37:31.775805  230782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:37:31.796510  230782 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:37:30.894200  235599 addons.go:530] duration metric: took 3.372439ms for enable addons: enabled=[]
	I1121 14:37:30.894224  235599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:37:31.015208  235599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:37:31.030245  235599 node_ready.go:35] waiting up to 6m0s for node "pause-738756" to be "Ready" ...
	I1121 14:37:31.039055  235599 node_ready.go:49] node "pause-738756" is "Ready"
	I1121 14:37:31.039075  235599 node_ready.go:38] duration metric: took 8.788645ms for node "pause-738756" to be "Ready" ...
	I1121 14:37:31.039087  235599 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:37:31.039124  235599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:37:31.050032  235599 api_server.go:72] duration metric: took 159.263491ms to wait for apiserver process to appear ...
	I1121 14:37:31.050052  235599 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:37:31.050070  235599 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:37:31.053783  235599 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 14:37:31.054842  235599 api_server.go:141] control plane version: v1.34.1
	I1121 14:37:31.054861  235599 api_server.go:131] duration metric: took 4.803486ms to wait for apiserver health ...
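The healthz wait above polls https://192.168.85.2:8443/healthz until it answers 200 "ok". A minimal polling loop for the same probe, skipping TLS verification for brevity (the real client trusts the cluster CA instead):

// Sketch: poll the apiserver healthz endpoint until it reports healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == 200 {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("apiserver never became healthy")
}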
	I1121 14:37:31.054868  235599 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:37:31.057957  235599 system_pods.go:59] 7 kube-system pods found
	I1121 14:37:31.057984  235599 system_pods.go:61] "coredns-66bc5c9577-tc86b" [07e1e5d6-3bb7-4e12-8bba-fb901718b11e] Running
	I1121 14:37:31.057990  235599 system_pods.go:61] "etcd-pause-738756" [0bf4da64-b88f-4dc0-af80-bb5df7271b6a] Running
	I1121 14:37:31.057994  235599 system_pods.go:61] "kindnet-xjsdb" [2b5a6c88-1f8c-4d52-8efc-72da2ae7668c] Running
	I1121 14:37:31.057998  235599 system_pods.go:61] "kube-apiserver-pause-738756" [1f962bf1-1ac8-4efe-ad8a-95b35c942fb6] Running
	I1121 14:37:31.058002  235599 system_pods.go:61] "kube-controller-manager-pause-738756" [47f89ac6-c579-41b5-beda-dad24ac8b3ef] Running
	I1121 14:37:31.058010  235599 system_pods.go:61] "kube-proxy-4l9nn" [648edea0-a50a-4381-99de-f96747b514f1] Running
	I1121 14:37:31.058014  235599 system_pods.go:61] "kube-scheduler-pause-738756" [39b65a58-441f-41ce-826d-e81aa995ff39] Running
	I1121 14:37:31.058021  235599 system_pods.go:74] duration metric: took 3.147989ms to wait for pod list to return data ...
	I1121 14:37:31.058030  235599 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:37:31.059755  235599 default_sa.go:45] found service account: "default"
	I1121 14:37:31.059769  235599 default_sa.go:55] duration metric: took 1.734863ms for default service account to be created ...
	I1121 14:37:31.059776  235599 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:37:31.061878  235599 system_pods.go:86] 7 kube-system pods found
	I1121 14:37:31.061899  235599 system_pods.go:89] "coredns-66bc5c9577-tc86b" [07e1e5d6-3bb7-4e12-8bba-fb901718b11e] Running
	I1121 14:37:31.061903  235599 system_pods.go:89] "etcd-pause-738756" [0bf4da64-b88f-4dc0-af80-bb5df7271b6a] Running
	I1121 14:37:31.061907  235599 system_pods.go:89] "kindnet-xjsdb" [2b5a6c88-1f8c-4d52-8efc-72da2ae7668c] Running
	I1121 14:37:31.061910  235599 system_pods.go:89] "kube-apiserver-pause-738756" [1f962bf1-1ac8-4efe-ad8a-95b35c942fb6] Running
	I1121 14:37:31.061916  235599 system_pods.go:89] "kube-controller-manager-pause-738756" [47f89ac6-c579-41b5-beda-dad24ac8b3ef] Running
	I1121 14:37:31.061919  235599 system_pods.go:89] "kube-proxy-4l9nn" [648edea0-a50a-4381-99de-f96747b514f1] Running
	I1121 14:37:31.061922  235599 system_pods.go:89] "kube-scheduler-pause-738756" [39b65a58-441f-41ce-826d-e81aa995ff39] Running
	I1121 14:37:31.061928  235599 system_pods.go:126] duration metric: took 2.147436ms to wait for k8s-apps to be running ...
	I1121 14:37:31.061935  235599 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:37:31.061969  235599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:37:31.073453  235599 system_svc.go:56] duration metric: took 11.51265ms WaitForService to wait for kubelet
	I1121 14:37:31.073474  235599 kubeadm.go:587] duration metric: took 182.707582ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:37:31.073492  235599 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:37:31.075180  235599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:37:31.075203  235599 node_conditions.go:123] node cpu capacity is 8
	I1121 14:37:31.075217  235599 node_conditions.go:105] duration metric: took 1.718622ms to run NodePressure ...
	I1121 14:37:31.075230  235599 start.go:242] waiting for startup goroutines ...
	I1121 14:37:31.075241  235599 start.go:247] waiting for cluster config update ...
	I1121 14:37:31.075254  235599 start.go:256] writing updated cluster config ...
	I1121 14:37:31.075536  235599 ssh_runner.go:195] Run: rm -f paused
	I1121 14:37:31.078967  235599 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:37:31.079592  235599 kapi.go:59] client config for pause-738756: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/client.crt", KeyFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/profiles/pause-738756/client.key", CAFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1121 14:37:31.081772  235599 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tc86b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.085112  235599 pod_ready.go:94] pod "coredns-66bc5c9577-tc86b" is "Ready"
	I1121 14:37:31.085129  235599 pod_ready.go:86] duration metric: took 3.340869ms for pod "coredns-66bc5c9577-tc86b" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.086620  235599 pod_ready.go:83] waiting for pod "etcd-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.089972  235599 pod_ready.go:94] pod "etcd-pause-738756" is "Ready"
	I1121 14:37:31.089989  235599 pod_ready.go:86] duration metric: took 3.351817ms for pod "etcd-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.091695  235599 pod_ready.go:83] waiting for pod "kube-apiserver-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.094803  235599 pod_ready.go:94] pod "kube-apiserver-pause-738756" is "Ready"
	I1121 14:37:31.094823  235599 pod_ready.go:86] duration metric: took 3.111829ms for pod "kube-apiserver-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.096281  235599 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.483026  235599 pod_ready.go:94] pod "kube-controller-manager-pause-738756" is "Ready"
	I1121 14:37:31.483055  235599 pod_ready.go:86] duration metric: took 386.756153ms for pod "kube-controller-manager-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.683291  235599 pod_ready.go:83] waiting for pod "kube-proxy-4l9nn" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:31.797209  230782 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-794941"
	I1121 14:37:31.797255  230782 host.go:66] Checking if "old-k8s-version-794941" exists ...
	I1121 14:37:31.797650  230782 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:37:31.797672  230782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:37:31.797722  230782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:37:31.797757  230782 cli_runner.go:164] Run: docker container inspect old-k8s-version-794941 --format={{.State.Status}}
	I1121 14:37:31.824362  230782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/old-k8s-version-794941/id_rsa Username:docker}
	I1121 14:37:31.827144  230782 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:37:31.827168  230782 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:37:31.827241  230782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:37:31.847003  230782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/old-k8s-version-794941/id_rsa Username:docker}
	I1121 14:37:31.859803  230782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:37:31.917999  230782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:37:31.936937  230782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:37:31.956427  230782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:37:32.073710  230782 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1121 14:37:32.075124  230782 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-794941" to be "Ready" ...
	I1121 14:37:32.300737  230782 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:37:32.084590  235599 pod_ready.go:94] pod "kube-proxy-4l9nn" is "Ready"
	I1121 14:37:32.084615  235599 pod_ready.go:86] duration metric: took 401.298514ms for pod "kube-proxy-4l9nn" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:32.283179  235599 pod_ready.go:83] waiting for pod "kube-scheduler-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:32.683716  235599 pod_ready.go:94] pod "kube-scheduler-pause-738756" is "Ready"
	I1121 14:37:32.683742  235599 pod_ready.go:86] duration metric: took 400.536666ms for pod "kube-scheduler-pause-738756" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:32.683756  235599 pod_ready.go:40] duration metric: took 1.60476816s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:37:32.727107  235599 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:37:32.728780  235599 out.go:179] * Done! kubectl is now configured to use "pause-738756" cluster and "default" namespace by default
	I1121 14:37:28.957192  202147 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	
	
	==> CRI-O <==
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.777516551Z" level=info msg="RDT not available in the host system"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.777524826Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.778264996Z" level=info msg="Conmon does support the --sync option"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.778282408Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.778293949Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.778959942Z" level=info msg="Conmon does support the --sync option"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.778973291Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.782778676Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.782802788Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.783440102Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.783933501Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.783995459Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.855870213Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-tc86b Namespace:kube-system ID:5fe88083690acd038be4fe16e709df436c8218e4f6d9cb2dfb0ebbb59ec1847b UID:07e1e5d6-3bb7-4e12-8bba-fb901718b11e NetNS:/var/run/netns/0564deae-79ad-48af-9c01-3299a3b4fd8b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009040c0}] Aliases:map[]}"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856026127Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-tc86b for CNI network kindnet (type=ptp)"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856409673Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856430453Z" level=info msg="Starting seccomp notifier watcher"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856504371Z" level=info msg="Create NRI interface"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856584529Z" level=info msg="built-in NRI default validator is disabled"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856598591Z" level=info msg="runtime interface created"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856611886Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856619607Z" level=info msg="runtime interface starting up..."
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856626766Z" level=info msg="starting plugins..."
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856637043Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 21 14:37:29 pause-738756 crio[2171]: time="2025-11-21T14:37:29.856861369Z" level=info msg="No systemd watchdog enabled"
	Nov 21 14:37:29 pause-738756 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ee3fce318e0a4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   5fe88083690ac       coredns-66bc5c9577-tc86b               kube-system
	56d9e1a182ba9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   b31c1ecd2e654       kindnet-xjsdb                          kube-system
	bcfe29abe0722       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   23 seconds ago      Running             kube-proxy                0                   2671f8638d54e       kube-proxy-4l9nn                       kube-system
	7e3b49e98729b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago      Running             kube-controller-manager   0                   12b6fc8defbaf       kube-controller-manager-pause-738756   kube-system
	8438e531acf65       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   34 seconds ago      Running             kube-apiserver            0                   ccf5b0a02a08e       kube-apiserver-pause-738756            kube-system
	7af2de2ff1798       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   34 seconds ago      Running             etcd                      0                   7165a01bd528d       etcd-pause-738756                      kube-system
	8aa934ad27499       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   34 seconds ago      Running             kube-scheduler            0                   85ab4117a0c9f       kube-scheduler-pause-738756            kube-system
	
	
	==> coredns [ee3fce318e0a45135660390d314eef5598d0f9b44a19c85b18df5bc45372d2b0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54418 - 28068 "HINFO IN 3238780774414035670.8389556130119888341. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.492059225s
	
	
	==> describe nodes <==
	Name:               pause-738756
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-738756
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=pause-738756
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_37_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:37:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-738756
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:37:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:37:27 +0000   Fri, 21 Nov 2025 14:37:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:37:27 +0000   Fri, 21 Nov 2025 14:37:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:37:27 +0000   Fri, 21 Nov 2025 14:37:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:37:27 +0000   Fri, 21 Nov 2025 14:37:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-738756
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                721de5fa-e884-4af4-93de-c2f2a3559246
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-tc86b                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-pause-738756                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-xjsdb                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-pause-738756             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-738756    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-4l9nn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-pause-738756             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node pause-738756 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node pause-738756 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node pause-738756 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node pause-738756 event: Registered Node pause-738756 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-738756 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [7af2de2ff1798680465409699de1fb7a564f52aa9c88d8fbedadd9879d1981ba] <==
	{"level":"warn","ts":"2025-11-21T14:37:03.542094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.550063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.557365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.566064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.572866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.580063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.588159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.596831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.612679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.619786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.626622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.633586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.647970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.655642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.662806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.669375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.676911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.683508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.690447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.698375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.704679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.711435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.728366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.741736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:37:03.794516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59014","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:37:37 up  1:20,  0 user,  load average: 3.23, 2.48, 1.58
	Linux pause-738756 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [56d9e1a182ba989709141440de730d7378abf4f126e0dd2e04cc6b0b9983fcc3] <==
	I1121 14:37:13.161364       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:37:13.161640       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:37:13.161777       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:37:13.161792       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:37:13.161813       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:37:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:37:13.363883       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:37:13.363986       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:37:13.364006       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:37:13.365221       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:37:13.690240       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:37:13.690268       1 metrics.go:72] Registering metrics
	I1121 14:37:13.690348       1 controller.go:711] "Syncing nftables rules"
	I1121 14:37:23.366618       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:37:23.366678       1 main.go:301] handling current node
	I1121 14:37:33.363789       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:37:33.363839       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8438e531acf65cde01ed3f51f011623b945419737cc2076672ef80edc1bade42] <==
	I1121 14:37:04.285681       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1121 14:37:04.285889       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 14:37:04.286549       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:37:04.290204       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:37:04.290457       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:37:04.296629       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:37:04.296900       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:37:04.466932       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:37:05.188019       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:37:05.191627       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:37:05.191644       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:37:05.603629       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:37:05.635660       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:37:05.690439       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:37:05.695433       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1121 14:37:05.696193       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:37:05.699629       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:37:06.215033       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:37:06.898956       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:37:06.908380       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:37:06.914821       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:37:11.371849       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:37:11.869903       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:37:11.874553       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:37:12.017095       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7e3b49e98729bb65e64035ae4d9e9d363321c322f22b5020b203d05e572e425c] <==
	I1121 14:37:11.191035       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-738756" podCIDRs=["10.244.0.0/24"]
	I1121 14:37:11.213609       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 14:37:11.214748       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 14:37:11.214768       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:37:11.214793       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:37:11.215046       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:37:11.215524       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:37:11.215541       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:37:11.215603       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:37:11.215604       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:37:11.215640       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:37:11.215640       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:37:11.215658       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:37:11.215661       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 14:37:11.215628       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:37:11.217047       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:37:11.217076       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 14:37:11.217114       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:37:11.217223       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 14:37:11.218681       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:37:11.227932       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:37:11.233135       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 14:37:11.238319       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 14:37:11.240628       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:37:26.184918       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bcfe29abe072243bf8e5d03b9436f40d0ef8010d13e70df048b62eaab674dfba] <==
	I1121 14:37:13.020699       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:37:13.089058       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:37:13.189610       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:37:13.189640       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:37:13.189713       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:37:13.206677       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:37:13.206715       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:37:13.211524       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:37:13.211956       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:37:13.211989       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:37:13.213219       1 config.go:200] "Starting service config controller"
	I1121 14:37:13.213256       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:37:13.213357       1 config.go:309] "Starting node config controller"
	I1121 14:37:13.213398       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:37:13.213467       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:37:13.213487       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:37:13.213598       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:37:13.213610       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:37:13.313448       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:37:13.313464       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:37:13.314208       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:37:13.314246       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8aa934ad274990a7ef46badfbc85e87cceabe99a35dd4597de2efe2efb335cbb] <==
	E1121 14:37:04.232180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:37:04.232218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:37:04.232291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:37:04.232284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:37:04.232319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:37:04.232384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:37:04.232453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:37:04.232399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:37:04.232406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:37:04.232422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:37:04.232548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:37:04.232590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:37:04.232597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:37:04.232636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:37:04.232717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:37:04.232743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:37:05.035786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:37:05.121803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:37:05.142984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:37:05.159926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:37:05.170010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:37:05.191050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:37:05.328859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:37:05.463406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1121 14:37:05.729632       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:37:11 pause-738756 kubelet[1308]: I1121 14:37:11.245049    1308 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048358    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm97l\" (UniqueName: \"kubernetes.io/projected/648edea0-a50a-4381-99de-f96747b514f1-kube-api-access-vm97l\") pod \"kube-proxy-4l9nn\" (UID: \"648edea0-a50a-4381-99de-f96747b514f1\") " pod="kube-system/kube-proxy-4l9nn"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048415    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xxbp\" (UniqueName: \"kubernetes.io/projected/2b5a6c88-1f8c-4d52-8efc-72da2ae7668c-kube-api-access-2xxbp\") pod \"kindnet-xjsdb\" (UID: \"2b5a6c88-1f8c-4d52-8efc-72da2ae7668c\") " pod="kube-system/kindnet-xjsdb"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048443    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/648edea0-a50a-4381-99de-f96747b514f1-xtables-lock\") pod \"kube-proxy-4l9nn\" (UID: \"648edea0-a50a-4381-99de-f96747b514f1\") " pod="kube-system/kube-proxy-4l9nn"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048522    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/648edea0-a50a-4381-99de-f96747b514f1-kube-proxy\") pod \"kube-proxy-4l9nn\" (UID: \"648edea0-a50a-4381-99de-f96747b514f1\") " pod="kube-system/kube-proxy-4l9nn"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048598    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/648edea0-a50a-4381-99de-f96747b514f1-lib-modules\") pod \"kube-proxy-4l9nn\" (UID: \"648edea0-a50a-4381-99de-f96747b514f1\") " pod="kube-system/kube-proxy-4l9nn"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048623    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2b5a6c88-1f8c-4d52-8efc-72da2ae7668c-cni-cfg\") pod \"kindnet-xjsdb\" (UID: \"2b5a6c88-1f8c-4d52-8efc-72da2ae7668c\") " pod="kube-system/kindnet-xjsdb"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048645    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b5a6c88-1f8c-4d52-8efc-72da2ae7668c-xtables-lock\") pod \"kindnet-xjsdb\" (UID: \"2b5a6c88-1f8c-4d52-8efc-72da2ae7668c\") " pod="kube-system/kindnet-xjsdb"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: I1121 14:37:12.048666    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b5a6c88-1f8c-4d52-8efc-72da2ae7668c-lib-modules\") pod \"kindnet-xjsdb\" (UID: \"2b5a6c88-1f8c-4d52-8efc-72da2ae7668c\") " pod="kube-system/kindnet-xjsdb"
	Nov 21 14:37:12 pause-738756 kubelet[1308]: E1121 14:37:12.154541    1308 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 21 14:37:12 pause-738756 kubelet[1308]: E1121 14:37:12.154596    1308 projected.go:196] Error preparing data for projected volume kube-api-access-vm97l for pod kube-system/kube-proxy-4l9nn: configmap "kube-root-ca.crt" not found
	Nov 21 14:37:12 pause-738756 kubelet[1308]: E1121 14:37:12.154718    1308 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 21 14:37:12 pause-738756 kubelet[1308]: E1121 14:37:12.154739    1308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/648edea0-a50a-4381-99de-f96747b514f1-kube-api-access-vm97l podName:648edea0-a50a-4381-99de-f96747b514f1 nodeName:}" failed. No retries permitted until 2025-11-21 14:37:12.654715808 +0000 UTC m=+6.008200521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vm97l" (UniqueName: "kubernetes.io/projected/648edea0-a50a-4381-99de-f96747b514f1-kube-api-access-vm97l") pod "kube-proxy-4l9nn" (UID: "648edea0-a50a-4381-99de-f96747b514f1") : configmap "kube-root-ca.crt" not found
	Nov 21 14:37:12 pause-738756 kubelet[1308]: E1121 14:37:12.154748    1308 projected.go:196] Error preparing data for projected volume kube-api-access-2xxbp for pod kube-system/kindnet-xjsdb: configmap "kube-root-ca.crt" not found
	Nov 21 14:37:12 pause-738756 kubelet[1308]: E1121 14:37:12.154872    1308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b5a6c88-1f8c-4d52-8efc-72da2ae7668c-kube-api-access-2xxbp podName:2b5a6c88-1f8c-4d52-8efc-72da2ae7668c nodeName:}" failed. No retries permitted until 2025-11-21 14:37:12.654842212 +0000 UTC m=+6.008326907 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2xxbp" (UniqueName: "kubernetes.io/projected/2b5a6c88-1f8c-4d52-8efc-72da2ae7668c-kube-api-access-2xxbp") pod "kindnet-xjsdb" (UID: "2b5a6c88-1f8c-4d52-8efc-72da2ae7668c") : configmap "kube-root-ca.crt" not found
	Nov 21 14:37:13 pause-738756 kubelet[1308]: I1121 14:37:13.781768    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4l9nn" podStartSLOduration=1.7817451979999999 podStartE2EDuration="1.781745198s" podCreationTimestamp="2025-11-21 14:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:37:13.781372485 +0000 UTC m=+7.134857217" watchObservedRunningTime="2025-11-21 14:37:13.781745198 +0000 UTC m=+7.135229912"
	Nov 21 14:37:13 pause-738756 kubelet[1308]: I1121 14:37:13.782351    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-xjsdb" podStartSLOduration=1.7823341639999999 podStartE2EDuration="1.782334164s" podCreationTimestamp="2025-11-21 14:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:37:13.771720171 +0000 UTC m=+7.125204885" watchObservedRunningTime="2025-11-21 14:37:13.782334164 +0000 UTC m=+7.135818877"
	Nov 21 14:37:23 pause-738756 kubelet[1308]: I1121 14:37:23.895191    1308 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:37:23 pause-738756 kubelet[1308]: I1121 14:37:23.929056    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07e1e5d6-3bb7-4e12-8bba-fb901718b11e-config-volume\") pod \"coredns-66bc5c9577-tc86b\" (UID: \"07e1e5d6-3bb7-4e12-8bba-fb901718b11e\") " pod="kube-system/coredns-66bc5c9577-tc86b"
	Nov 21 14:37:23 pause-738756 kubelet[1308]: I1121 14:37:23.929098    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptghh\" (UniqueName: \"kubernetes.io/projected/07e1e5d6-3bb7-4e12-8bba-fb901718b11e-kube-api-access-ptghh\") pod \"coredns-66bc5c9577-tc86b\" (UID: \"07e1e5d6-3bb7-4e12-8bba-fb901718b11e\") " pod="kube-system/coredns-66bc5c9577-tc86b"
	Nov 21 14:37:24 pause-738756 kubelet[1308]: I1121 14:37:24.796454    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tc86b" podStartSLOduration=12.796433599 podStartE2EDuration="12.796433599s" podCreationTimestamp="2025-11-21 14:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:37:24.796227603 +0000 UTC m=+18.149712339" watchObservedRunningTime="2025-11-21 14:37:24.796433599 +0000 UTC m=+18.149918312"
	Nov 21 14:37:33 pause-738756 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:37:33 pause-738756 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:37:33 pause-738756 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 21 14:37:33 pause-738756 systemd[1]: kubelet.service: Consumed 1.097s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-738756 -n pause-738756
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-738756 -n pause-738756: exit status 2 (318.329032ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-738756 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (4.85s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-794941 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-794941 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (274.665755ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:37:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-794941 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-794941 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-794941 describe deploy/metrics-server -n kube-system: exit status 1 (67.655796ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-794941 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
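The MK_ADDON_ENABLE_PAUSED error above is not an addon-specific failure: before enabling an addon, minikube checks whether the cluster is paused, and per the stderr that check runs sudo runc list -f json inside the node, which fails because /run/runc does not exist on this crio node. A minimal manual check (a sketch only; the ssh form mirrors the invocations in the Audit table below, and crictl being available in the kicbase image is an assumption):

	out/minikube-linux-amd64 ssh -p old-k8s-version-794941 -- sudo ls /run/runc
	out/minikube-linux-amd64 ssh -p old-k8s-version-794941 -- sudo runc list -f json
	out/minikube-linux-amd64 ssh -p old-k8s-version-794941 -- sudo crictl ps --state running

If the first two commands fail with "no such file or directory" while crictl still lists running containers, the failure is confined to the paused-state lookup rather than to the workloads themselves.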
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-794941
helpers_test.go:243: (dbg) docker inspect old-k8s-version-794941:

-- stdout --
	[
	    {
	        "Id": "b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3",
	        "Created": "2025-11-21T14:37:02.714934052Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 231528,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:37:02.766153865Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3/hostname",
	        "HostsPath": "/var/lib/docker/containers/b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3/hosts",
	        "LogPath": "/var/lib/docker/containers/b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3/b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3-json.log",
	        "Name": "/old-k8s-version-794941",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-794941:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-794941",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3",
	                "LowerDir": "/var/lib/docker/overlay2/c94fc1564c9f1d7c0d4997f74e9d5cf1f54d181439b877d5c418725371d7e094-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c94fc1564c9f1d7c0d4997f74e9d5cf1f54d181439b877d5c418725371d7e094/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c94fc1564c9f1d7c0d4997f74e9d5cf1f54d181439b877d5c418725371d7e094/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c94fc1564c9f1d7c0d4997f74e9d5cf1f54d181439b877d5c418725371d7e094/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-794941",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-794941/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-794941",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-794941",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-794941",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5771dff5856510fc706dc4e6200306f8a189208533ece4395115c94ea9095a8f",
	            "SandboxKey": "/var/run/docker/netns/5771dff58565",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-794941": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fcd2cf468008e08589bfa63705aa450680f6e45d22486fee930702c79b4654b7",
	                    "EndpointID": "4a1b72b3296e916b7ccdd55c68ac88d3df8c370b289af1f1179313f41b4590da",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "aa:b6:fc:70:d9:cb",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-794941",
	                        "b81aa4f3bb48"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-794941 -n old-k8s-version-794941
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-794941 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ multinode-384928 cp multinode-384928-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1537588437/001/cp-test_multinode-384928-m03.txt                                                                                             │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m03 sudo cat /home/docker/cp-test.txt                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ cp      │ multinode-384928 cp multinode-384928-m03:/home/docker/cp-test.txt multinode-384928:/home/docker/cp-test_multinode-384928-m03_multinode-384928.txt                                                                                             │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m03 sudo cat /home/docker/cp-test.txt                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928 sudo cat /home/docker/cp-test_multinode-384928-m03_multinode-384928.txt                                                                                                                              │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ cp      │ multinode-384928 cp multinode-384928-m03:/home/docker/cp-test.txt multinode-384928-m02:/home/docker/cp-test_multinode-384928-m03_multinode-384928-m02.txt                                                                                     │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m03 sudo cat /home/docker/cp-test.txt                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m02 sudo cat /home/docker/cp-test_multinode-384928-m03_multinode-384928-m02.txt                                                                                                                      │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ node    │ multinode-384928 node stop m03                                                                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ node    │ multinode-384928 node start m03 -v=5 --alsologtostderr                                                                                                                                                                                        │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ node    │ list -p multinode-384928                                                                                                                                                                                                                      │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ stop    │ -p multinode-384928                                                                                                                                                                                                                           │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p cert-expiration-046125 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-046125   │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p cert-options-116734 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ delete  │ -p force-systemd-env-653926                                                                                                                                                                                                                   │ force-systemd-env-653926 │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p pause-738756 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                                     │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:37 UTC │
	│ ssh     │ cert-options-116734 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ ssh     │ -p cert-options-116734 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ delete  │ -p cert-options-116734                                                                                                                                                                                                                        │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:37 UTC │
	│ start   │ -p pause-738756 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ pause   │ -p pause-738756 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	│ delete  │ -p pause-738756                                                                                                                                                                                                                               │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ start   │ -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589411        │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-794941 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:37:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:37:40.051687  239114 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:37:40.051806  239114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:37:40.051818  239114 out.go:374] Setting ErrFile to fd 2...
	I1121 14:37:40.051825  239114 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:37:40.052033  239114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:37:40.052550  239114 out.go:368] Setting JSON to false
	I1121 14:37:40.053716  239114 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4809,"bootTime":1763731051,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:37:40.053801  239114 start.go:143] virtualization: kvm guest
	I1121 14:37:40.055537  239114 out.go:179] * [no-preload-589411] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:37:40.057019  239114 notify.go:221] Checking for updates...
	I1121 14:37:40.057030  239114 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:37:40.058201  239114 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:37:40.059405  239114 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:37:40.060508  239114 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:37:40.061550  239114 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:37:40.062654  239114 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:37:40.063958  239114 config.go:182] Loaded profile config "cert-expiration-046125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:37:40.064049  239114 config.go:182] Loaded profile config "kubernetes-upgrade-214044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:37:40.064128  239114 config.go:182] Loaded profile config "old-k8s-version-794941": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1121 14:37:40.064224  239114 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:37:40.087856  239114 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:37:40.087930  239114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:37:40.145747  239114 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:37:40.13547351 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:37:40.145845  239114 docker.go:319] overlay module found
	I1121 14:37:40.147445  239114 out.go:179] * Using the docker driver based on user configuration
	I1121 14:37:40.148492  239114 start.go:309] selected driver: docker
	I1121 14:37:40.148507  239114 start.go:930] validating driver "docker" against <nil>
	I1121 14:37:40.148518  239114 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:37:40.149057  239114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:37:40.203697  239114 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:37:40.193519172 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:37:40.203887  239114 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:37:40.204090  239114 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:37:40.205604  239114 out.go:179] * Using Docker driver with root privileges
	I1121 14:37:40.206863  239114 cni.go:84] Creating CNI manager for ""
	I1121 14:37:40.206939  239114 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:37:40.206953  239114 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 14:37:40.207014  239114 start.go:353] cluster config:
	{Name:no-preload-589411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:37:40.208188  239114 out.go:179] * Starting "no-preload-589411" primary control-plane node in "no-preload-589411" cluster
	I1121 14:37:40.209237  239114 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:37:40.210230  239114 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:37:40.211256  239114 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:37:40.211346  239114 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:37:40.211429  239114 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/config.json ...
	I1121 14:37:40.211469  239114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/config.json: {Name:mk01f7ccf1882192542eb0588509cc199d6184ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:37:40.211549  239114 cache.go:107] acquiring lock: {Name:mke75466844e5b5d026463813774c1f728aaddeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:37:40.211582  239114 cache.go:107] acquiring lock: {Name:mkd98d9687b2082e3f3e88c7fade59999fdecf44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:37:40.211621  239114 cache.go:107] acquiring lock: {Name:mkd096b3a3fa30971ac4cf9acc7857a7ffd9853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:37:40.211659  239114 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1121 14:37:40.211671  239114 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 114.327µs
	I1121 14:37:40.211640  239114 cache.go:107] acquiring lock: {Name:mkd4a76239f4b71fdf99ac5a759cd01897368f7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:37:40.211687  239114 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1121 14:37:40.211703  239114 cache.go:107] acquiring lock: {Name:mk42daec646d706ae0683942a66a6acc7e89145d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:37:40.211710  239114 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:37:40.211710  239114 cache.go:107] acquiring lock: {Name:mke34bf7a39c66927fe2657ec23445f04ebabbb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:37:40.211724  239114 cache.go:107] acquiring lock: {Name:mk16a0f56ae9b12023a6268ab9e2e14cd775531c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:37:40.211759  239114 cache.go:107] acquiring lock: {Name:mk44ffbe1b30798f442309f17630d5f372940d03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:37:40.211875  239114 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:37:40.211897  239114 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:37:40.211919  239114 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:37:40.211928  239114 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:37:40.211959  239114 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1121 14:37:40.211974  239114 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:37:40.213058  239114 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:37:40.213139  239114 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:37:40.213058  239114 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:37:40.213062  239114 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:37:40.213059  239114 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1121 14:37:40.213059  239114 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:37:40.213301  239114 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:37:40.231679  239114 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:37:40.231695  239114 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:37:40.231712  239114 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:37:40.231735  239114 start.go:360] acquireMachinesLock for no-preload-589411: {Name:mk828f66be6805be79eae119877f5f43d8b19d75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:37:40.231819  239114 start.go:364] duration metric: took 66.637µs to acquireMachinesLock for "no-preload-589411"
	I1121 14:37:40.231846  239114 start.go:93] Provisioning new machine with config: &{Name:no-preload-589411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:37:40.231908  239114 start.go:125] createHost starting for "" (driver="docker")
	W1121 14:37:38.578531  230782 node_ready.go:57] node "old-k8s-version-794941" has "Ready":"False" status (will retry)
	W1121 14:37:41.078472  230782 node_ready.go:57] node "old-k8s-version-794941" has "Ready":"False" status (will retry)
	I1121 14:37:40.234297  239114 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:37:40.234494  239114 start.go:159] libmachine.API.Create for "no-preload-589411" (driver="docker")
	I1121 14:37:40.234522  239114 client.go:173] LocalClient.Create starting
	I1121 14:37:40.234597  239114 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem
	I1121 14:37:40.234627  239114 main.go:143] libmachine: Decoding PEM data...
	I1121 14:37:40.234642  239114 main.go:143] libmachine: Parsing certificate...
	I1121 14:37:40.234696  239114 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem
	I1121 14:37:40.234722  239114 main.go:143] libmachine: Decoding PEM data...
	I1121 14:37:40.234730  239114 main.go:143] libmachine: Parsing certificate...
	I1121 14:37:40.235017  239114 cli_runner.go:164] Run: docker network inspect no-preload-589411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:37:40.251755  239114 cli_runner.go:211] docker network inspect no-preload-589411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:37:40.251808  239114 network_create.go:284] running [docker network inspect no-preload-589411] to gather additional debugging logs...
	I1121 14:37:40.251824  239114 cli_runner.go:164] Run: docker network inspect no-preload-589411
	W1121 14:37:40.266351  239114 cli_runner.go:211] docker network inspect no-preload-589411 returned with exit code 1
	I1121 14:37:40.266376  239114 network_create.go:287] error running [docker network inspect no-preload-589411]: docker network inspect no-preload-589411: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-589411 not found
	I1121 14:37:40.266386  239114 network_create.go:289] output of [docker network inspect no-preload-589411]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-589411 not found
	
	** /stderr **
	I1121 14:37:40.266460  239114 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:37:40.283693  239114 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-28b1c9d83f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:19:47:f8:32:b5} reservation:<nil>}
	I1121 14:37:40.284278  239114 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-701670d7ab7f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:ae:cd:b4:3f:5e} reservation:<nil>}
	I1121 14:37:40.284826  239114 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-753e8bd7b54d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:87:d4:c1:6c:14} reservation:<nil>}
	I1121 14:37:40.285329  239114 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3682f8572a9e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:1d:4d:e3:4f:0f} reservation:<nil>}
	I1121 14:37:40.285905  239114 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d122c0}
	I1121 14:37:40.285930  239114 network_create.go:124] attempt to create docker network no-preload-589411 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1121 14:37:40.285973  239114 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-589411 no-preload-589411
	I1121 14:37:40.332548  239114 network_create.go:108] docker network no-preload-589411 192.168.85.0/24 created
	I1121 14:37:40.332588  239114 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-589411" container
	I1121 14:37:40.332636  239114 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:37:40.349494  239114 cli_runner.go:164] Run: docker volume create no-preload-589411 --label name.minikube.sigs.k8s.io=no-preload-589411 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:37:40.367211  239114 oci.go:103] Successfully created a docker volume no-preload-589411
	I1121 14:37:40.367278  239114 cli_runner.go:164] Run: docker run --rm --name no-preload-589411-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-589411 --entrypoint /usr/bin/test -v no-preload-589411:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:37:40.390942  239114 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1121 14:37:40.401539  239114 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1121 14:37:40.410653  239114 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1121 14:37:40.417529  239114 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1121 14:37:40.427338  239114 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1121 14:37:40.434468  239114 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1121 14:37:40.508354  239114 cache.go:162] opening:  /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1121 14:37:40.540671  239114 cache.go:157] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1121 14:37:40.540693  239114 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 328.999385ms
	I1121 14:37:40.540704  239114 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1121 14:37:40.769026  239114 oci.go:107] Successfully prepared a docker volume no-preload-589411
	I1121 14:37:40.769104  239114 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1121 14:37:40.769176  239114 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1121 14:37:40.769215  239114 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1121 14:37:40.769256  239114 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:37:40.832395  239114 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-589411 --name no-preload-589411 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-589411 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-589411 --network no-preload-589411 --ip 192.168.85.2 --volume no-preload-589411:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:37:40.842445  239114 cache.go:157] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1121 14:37:40.842469  239114 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 630.761126ms
	I1121 14:37:40.842481  239114 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1121 14:37:41.169497  239114 cli_runner.go:164] Run: docker container inspect no-preload-589411 --format={{.State.Running}}
	I1121 14:37:41.188094  239114 cli_runner.go:164] Run: docker container inspect no-preload-589411 --format={{.State.Status}}
	I1121 14:37:41.206155  239114 cli_runner.go:164] Run: docker exec no-preload-589411 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:37:41.251280  239114 oci.go:144] the created container "no-preload-589411" has a running status.
	I1121 14:37:41.251306  239114 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/no-preload-589411/id_rsa...
	I1121 14:37:41.556954  239114 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-11045/.minikube/machines/no-preload-589411/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:37:41.594523  239114 cli_runner.go:164] Run: docker container inspect no-preload-589411 --format={{.State.Status}}
	I1121 14:37:41.616940  239114 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:37:41.616974  239114 kic_runner.go:114] Args: [docker exec --privileged no-preload-589411 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:37:41.671650  239114 cli_runner.go:164] Run: docker container inspect no-preload-589411 --format={{.State.Status}}
	I1121 14:37:41.696020  239114 machine.go:94] provisionDockerMachine start ...
	I1121 14:37:41.696136  239114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589411
	I1121 14:37:41.719689  239114 main.go:143] libmachine: Using SSH client type: native
	I1121 14:37:41.720032  239114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33059 <nil> <nil>}
	I1121 14:37:41.720053  239114 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:37:41.789141  239114 cache.go:157] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1121 14:37:41.789171  239114 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.577468125s
	I1121 14:37:41.789185  239114 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1121 14:37:41.813962  239114 cache.go:157] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1121 14:37:41.813989  239114 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.602457705s
	I1121 14:37:41.814005  239114 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1121 14:37:41.851140  239114 cache.go:157] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1121 14:37:41.851307  239114 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.639621853s
	I1121 14:37:41.851352  239114 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1121 14:37:41.868816  239114 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-589411
	
	I1121 14:37:41.868844  239114 ubuntu.go:182] provisioning hostname "no-preload-589411"
	I1121 14:37:41.868910  239114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589411
	I1121 14:37:41.889334  239114 main.go:143] libmachine: Using SSH client type: native
	I1121 14:37:41.889713  239114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33059 <nil> <nil>}
	I1121 14:37:41.889735  239114 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-589411 && echo "no-preload-589411" | sudo tee /etc/hostname
	I1121 14:37:41.924873  239114 cache.go:157] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1121 14:37:41.924903  239114 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.713322342s
	I1121 14:37:41.924913  239114 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1121 14:37:42.044912  239114 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-589411
	
	I1121 14:37:42.044990  239114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589411
	I1121 14:37:42.063491  239114 main.go:143] libmachine: Using SSH client type: native
	I1121 14:37:42.063754  239114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33059 <nil> <nil>}
	I1121 14:37:42.063773  239114 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-589411' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-589411/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-589411' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:37:42.184655  239114 cache.go:157] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1121 14:37:42.184679  239114 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 1.973059632s
	I1121 14:37:42.184691  239114 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1121 14:37:42.184706  239114 cache.go:87] Successfully saved all images to host disk.
	I1121 14:37:42.193872  239114 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:37:42.193898  239114 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11045/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11045/.minikube}
	I1121 14:37:42.193926  239114 ubuntu.go:190] setting up certificates
	I1121 14:37:42.193943  239114 provision.go:84] configureAuth start
	I1121 14:37:42.193999  239114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-589411
	I1121 14:37:42.211640  239114 provision.go:143] copyHostCerts
	I1121 14:37:42.211697  239114 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem, removing ...
	I1121 14:37:42.211706  239114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem
	I1121 14:37:42.211778  239114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem (1078 bytes)
	I1121 14:37:42.211880  239114 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem, removing ...
	I1121 14:37:42.211893  239114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem
	I1121 14:37:42.211930  239114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem (1123 bytes)
	I1121 14:37:42.212023  239114 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem, removing ...
	I1121 14:37:42.212034  239114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem
	I1121 14:37:42.212074  239114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem (1679 bytes)
	I1121 14:37:42.212150  239114 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem org=jenkins.no-preload-589411 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-589411]
	I1121 14:37:42.480135  239114 provision.go:177] copyRemoteCerts
	I1121 14:37:42.480199  239114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:37:42.480242  239114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589411
	I1121 14:37:42.497728  239114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/no-preload-589411/id_rsa Username:docker}
	I1121 14:37:42.591248  239114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:37:42.609070  239114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:37:42.625386  239114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:37:42.641159  239114 provision.go:87] duration metric: took 447.205226ms to configureAuth
	I1121 14:37:42.641181  239114 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:37:42.641332  239114 config.go:182] Loaded profile config "no-preload-589411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:37:42.641436  239114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589411
	I1121 14:37:42.659405  239114 main.go:143] libmachine: Using SSH client type: native
	I1121 14:37:42.659635  239114 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33059 <nil> <nil>}
	I1121 14:37:42.659661  239114 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:37:42.914989  239114 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:37:42.915012  239114 machine.go:97] duration metric: took 1.218963321s to provisionDockerMachine
	I1121 14:37:42.915022  239114 client.go:176] duration metric: took 2.680494523s to LocalClient.Create
	I1121 14:37:42.915040  239114 start.go:167] duration metric: took 2.68054513s to libmachine.API.Create "no-preload-589411"
	I1121 14:37:42.915048  239114 start.go:293] postStartSetup for "no-preload-589411" (driver="docker")
	I1121 14:37:42.915061  239114 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:37:42.915121  239114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:37:42.915166  239114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589411
	I1121 14:37:42.932266  239114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/no-preload-589411/id_rsa Username:docker}
	I1121 14:37:43.027853  239114 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:37:43.031199  239114 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:37:43.031234  239114 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:37:43.031244  239114 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/addons for local assets ...
	I1121 14:37:43.031299  239114 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/files for local assets ...
	I1121 14:37:43.031391  239114 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem -> 145422.pem in /etc/ssl/certs
	I1121 14:37:43.031518  239114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:37:43.038904  239114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:37:43.057776  239114 start.go:296] duration metric: took 142.71624ms for postStartSetup
	I1121 14:37:43.058103  239114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-589411
	I1121 14:37:43.075113  239114 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/config.json ...
	I1121 14:37:43.075375  239114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:37:43.075429  239114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589411
	I1121 14:37:43.092743  239114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/no-preload-589411/id_rsa Username:docker}
	I1121 14:37:43.183412  239114 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:37:43.187645  239114 start.go:128] duration metric: took 2.955725258s to createHost
	I1121 14:37:43.187664  239114 start.go:83] releasing machines lock for "no-preload-589411", held for 2.955832947s
	I1121 14:37:43.187718  239114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-589411
	I1121 14:37:43.204885  239114 ssh_runner.go:195] Run: cat /version.json
	I1121 14:37:43.204924  239114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589411
	I1121 14:37:43.204973  239114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:37:43.205026  239114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589411
	I1121 14:37:43.222767  239114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/no-preload-589411/id_rsa Username:docker}
	I1121 14:37:43.223095  239114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33059 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/no-preload-589411/id_rsa Username:docker}
	I1121 14:37:43.312020  239114 ssh_runner.go:195] Run: systemctl --version
	I1121 14:37:43.401289  239114 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:37:43.433243  239114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:37:43.437517  239114 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:37:43.437590  239114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:37:43.460924  239114 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 14:37:43.460941  239114 start.go:496] detecting cgroup driver to use...
	I1121 14:37:43.460967  239114 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:37:43.461012  239114 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:37:43.476195  239114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:37:43.487618  239114 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:37:43.487658  239114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:37:43.502740  239114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:37:43.518470  239114 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:37:43.598344  239114 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:37:43.681394  239114 docker.go:234] disabling docker service ...
	I1121 14:37:43.681456  239114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:37:43.699286  239114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:37:43.710517  239114 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:37:43.791863  239114 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:37:43.870428  239114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:37:43.881592  239114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:37:43.895033  239114 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:37:43.895085  239114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:43.904060  239114 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 14:37:43.904113  239114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:43.912099  239114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:43.919943  239114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:43.927931  239114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:37:43.935122  239114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:43.943196  239114 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:43.956021  239114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:37:43.964028  239114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:37:43.970704  239114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:37:43.977269  239114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:37:44.054736  239114 ssh_runner.go:195] Run: sudo systemctl restart crio
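Taken together, the sed edits above leave the touched keys in /etc/crio/crio.conf.d/02-crio.conf looking like the fragment below (an illustrative reconstruction from the commands in this log; any settings the run did not touch are omitted):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]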
	I1121 14:37:44.532943  239114 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:37:44.533012  239114 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:37:44.536813  239114 start.go:564] Will wait 60s for crictl version
	I1121 14:37:44.536870  239114 ssh_runner.go:195] Run: which crictl
	I1121 14:37:44.540184  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:37:44.563586  239114 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:37:44.563649  239114 ssh_runner.go:195] Run: crio --version
	I1121 14:37:44.590114  239114 ssh_runner.go:195] Run: crio --version
	I1121 14:37:44.618046  239114 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:37:44.619133  239114 cli_runner.go:164] Run: docker network inspect no-preload-589411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:37:44.636261  239114 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:37:44.640172  239114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
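Between the 127.0.1.1 fixup during provisioning and the host.minikube.internal rewrite here, the guest's /etc/hosts ends up carrying two minikube-managed entries alongside its existing ones (illustrative fragment assembled from the commands above):

	127.0.1.1	no-preload-589411
	192.168.85.1	host.minikube.internal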
	I1121 14:37:44.649994  239114 kubeadm.go:884] updating cluster {Name:no-preload-589411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:37:44.650100  239114 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:37:44.650134  239114 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:37:44.673175  239114 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1121 14:37:44.673195  239114 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1121 14:37:44.673248  239114 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:37:44.673270  239114 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:37:44.673279  239114 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:37:44.673290  239114 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:37:44.673252  239114 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:37:44.673316  239114 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:37:44.673305  239114 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1121 14:37:44.673346  239114 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:37:44.674387  239114 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1121 14:37:44.674410  239114 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:37:44.674415  239114 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:37:44.674424  239114 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:37:44.674387  239114 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:37:44.674391  239114 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:37:44.674391  239114 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:37:44.674462  239114 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:37:44.808088  239114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:37:44.824713  239114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:37:44.825942  239114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1121 14:37:44.828017  239114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:37:44.842615  239114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1121 14:37:44.844144  239114 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1121 14:37:44.844185  239114 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:37:44.844227  239114 ssh_runner.go:195] Run: which crictl
	I1121 14:37:44.850360  239114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:37:44.866159  239114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:37:44.869901  239114 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1121 14:37:44.869942  239114 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1121 14:37:44.869968  239114 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1121 14:37:44.869981  239114 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1121 14:37:44.869994  239114 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:37:44.870021  239114 ssh_runner.go:195] Run: which crictl
	I1121 14:37:44.869944  239114 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:37:44.870039  239114 ssh_runner.go:195] Run: which crictl
	I1121 14:37:44.870082  239114 ssh_runner.go:195] Run: which crictl
	I1121 14:37:44.883693  239114 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1121 14:37:44.883737  239114 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1121 14:37:44.883752  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:37:44.883776  239114 ssh_runner.go:195] Run: which crictl
	I1121 14:37:44.889710  239114 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1121 14:37:44.889756  239114 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:37:44.889799  239114 ssh_runner.go:195] Run: which crictl
	I1121 14:37:44.901918  239114 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1121 14:37:44.901958  239114 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:37:44.902007  239114 ssh_runner.go:195] Run: which crictl
	I1121 14:37:44.902024  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:37:44.902055  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:37:44.902093  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:37:44.910279  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:37:44.910279  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:37:44.910309  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:37:44.933488  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:37:44.934135  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:37:44.963391  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:37:44.963432  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:37:44.963436  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1121 14:37:44.963391  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:37:44.963479  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:37:44.963497  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1121 14:37:44.963520  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:37:45.000319  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1121 14:37:45.000362  239114 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1121 14:37:45.000406  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1121 14:37:45.000447  239114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:37:45.000456  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1121 14:37:45.000483  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1121 14:37:45.000505  239114 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1121 14:37:45.000535  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1121 14:37:45.000596  239114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:37:45.030201  239114 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1121 14:37:45.030302  239114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:37:45.033633  239114 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1121 14:37:45.033652  239114 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1121 14:37:45.033673  239114 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1121 14:37:45.033701  239114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1121 14:37:45.033721  239114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:37:45.033740  239114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:37:45.033767  239114 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1121 14:37:45.033840  239114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1121 14:37:45.036734  239114 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1121 14:37:45.036774  239114 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1121 14:37:45.036798  239114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1121 14:37:45.036805  239114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:37:45.036849  239114 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1121 14:37:45.036861  239114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1121 14:37:45.047765  239114 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1121 14:37:45.047786  239114 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1121 14:37:45.047796  239114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1121 14:37:45.047808  239114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1121 14:37:45.047824  239114 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1121 14:37:45.047862  239114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
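Each stat/scp pair above is the same cache-to-guest transfer pattern: probe the target path on the guest and copy the image tarball only when stat fails. As an equivalent shell sketch (minikube runs both halves through its ssh_runner rather than a literal script, and the scp destination here is only illustrative):

	img=/var/lib/minikube/images/kube-proxy_v1.34.1
	src=/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	if ! stat -c "%s %y" "$img" >/dev/null 2>&1; then
	    # not yet present on the guest; transfer it over the mapped SSH port
	    scp -P 33059 "$src" "docker@127.0.0.1:$img"
	fi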
	W1121 14:37:43.578038  230782 node_ready.go:57] node "old-k8s-version-794941" has "Ready":"False" status (will retry)
	I1121 14:37:45.579169  230782 node_ready.go:49] node "old-k8s-version-794941" is "Ready"
	I1121 14:37:45.579202  230782 node_ready.go:38] duration metric: took 13.504024793s for node "old-k8s-version-794941" to be "Ready" ...
	I1121 14:37:45.579221  230782 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:37:45.579276  230782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:37:45.595699  230782 api_server.go:72] duration metric: took 13.822396328s to wait for apiserver process to appear ...
	I1121 14:37:45.595757  230782 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:37:45.595784  230782 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1121 14:37:45.603626  230782 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
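The same healthz probe can be reproduced by hand against the endpoint logged above; a sketch, where -k (skip certificate verification) is an assumption since the in-process check trusts the cluster CA:

	curl -sk https://192.168.94.2:8443/healthz
	# -> ok (HTTP 200, as reported above)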
	I1121 14:37:45.604962  230782 api_server.go:141] control plane version: v1.28.0
	I1121 14:37:45.604992  230782 api_server.go:131] duration metric: took 9.226114ms to wait for apiserver health ...
	I1121 14:37:45.605002  230782 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:37:45.609040  230782 system_pods.go:59] 8 kube-system pods found
	I1121 14:37:45.609079  230782 system_pods.go:61] "coredns-5dd5756b68-h4xjd" [5c7fd9b1-424a-4401-932f-775af443b1b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:37:45.609088  230782 system_pods.go:61] "etcd-old-k8s-version-794941" [cc92a3b3-3d27-4b3a-806b-a99049546a7d] Running
	I1121 14:37:45.609097  230782 system_pods.go:61] "kindnet-9pjsf" [2c2633f4-bb39-4747-b0c8-c39c76f724cb] Running
	I1121 14:37:45.609104  230782 system_pods.go:61] "kube-apiserver-old-k8s-version-794941" [71dbc4a7-b490-4c73-9e72-0f7fa6d37fca] Running
	I1121 14:37:45.609111  230782 system_pods.go:61] "kube-controller-manager-old-k8s-version-794941" [ea18a1a2-fdd4-49b0-bf69-5fd9e0fe1d14] Running
	I1121 14:37:45.609120  230782 system_pods.go:61] "kube-proxy-w4rcg" [1ddd5037-510d-4bcf-b7d8-a61b6f2019e2] Running
	I1121 14:37:45.609125  230782 system_pods.go:61] "kube-scheduler-old-k8s-version-794941" [cf5d7654-5d04-4181-9499-0797085f748c] Running
	I1121 14:37:45.609138  230782 system_pods.go:61] "storage-provisioner" [e6cb0d18-f24f-4347-aa16-705c736303b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:37:45.609145  230782 system_pods.go:74] duration metric: took 4.136922ms to wait for pod list to return data ...
	I1121 14:37:45.609158  230782 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:37:45.611431  230782 default_sa.go:45] found service account: "default"
	I1121 14:37:45.611453  230782 default_sa.go:55] duration metric: took 2.288085ms for default service account to be created ...
	I1121 14:37:45.611464  230782 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:37:45.615898  230782 system_pods.go:86] 8 kube-system pods found
	I1121 14:37:45.615932  230782 system_pods.go:89] "coredns-5dd5756b68-h4xjd" [5c7fd9b1-424a-4401-932f-775af443b1b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:37:45.615941  230782 system_pods.go:89] "etcd-old-k8s-version-794941" [cc92a3b3-3d27-4b3a-806b-a99049546a7d] Running
	I1121 14:37:45.615951  230782 system_pods.go:89] "kindnet-9pjsf" [2c2633f4-bb39-4747-b0c8-c39c76f724cb] Running
	I1121 14:37:45.615957  230782 system_pods.go:89] "kube-apiserver-old-k8s-version-794941" [71dbc4a7-b490-4c73-9e72-0f7fa6d37fca] Running
	I1121 14:37:45.615965  230782 system_pods.go:89] "kube-controller-manager-old-k8s-version-794941" [ea18a1a2-fdd4-49b0-bf69-5fd9e0fe1d14] Running
	I1121 14:37:45.615969  230782 system_pods.go:89] "kube-proxy-w4rcg" [1ddd5037-510d-4bcf-b7d8-a61b6f2019e2] Running
	I1121 14:37:45.615975  230782 system_pods.go:89] "kube-scheduler-old-k8s-version-794941" [cf5d7654-5d04-4181-9499-0797085f748c] Running
	I1121 14:37:45.615984  230782 system_pods.go:89] "storage-provisioner" [e6cb0d18-f24f-4347-aa16-705c736303b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:37:45.616007  230782 retry.go:31] will retry after 256.953762ms: missing components: kube-dns
	I1121 14:37:45.877799  230782 system_pods.go:86] 8 kube-system pods found
	I1121 14:37:45.877831  230782 system_pods.go:89] "coredns-5dd5756b68-h4xjd" [5c7fd9b1-424a-4401-932f-775af443b1b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:37:45.877839  230782 system_pods.go:89] "etcd-old-k8s-version-794941" [cc92a3b3-3d27-4b3a-806b-a99049546a7d] Running
	I1121 14:37:45.877846  230782 system_pods.go:89] "kindnet-9pjsf" [2c2633f4-bb39-4747-b0c8-c39c76f724cb] Running
	I1121 14:37:45.877851  230782 system_pods.go:89] "kube-apiserver-old-k8s-version-794941" [71dbc4a7-b490-4c73-9e72-0f7fa6d37fca] Running
	I1121 14:37:45.877857  230782 system_pods.go:89] "kube-controller-manager-old-k8s-version-794941" [ea18a1a2-fdd4-49b0-bf69-5fd9e0fe1d14] Running
	I1121 14:37:45.877862  230782 system_pods.go:89] "kube-proxy-w4rcg" [1ddd5037-510d-4bcf-b7d8-a61b6f2019e2] Running
	I1121 14:37:45.877866  230782 system_pods.go:89] "kube-scheduler-old-k8s-version-794941" [cf5d7654-5d04-4181-9499-0797085f748c] Running
	I1121 14:37:45.877874  230782 system_pods.go:89] "storage-provisioner" [e6cb0d18-f24f-4347-aa16-705c736303b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:37:45.877891  230782 retry.go:31] will retry after 286.446358ms: missing components: kube-dns
	I1121 14:37:46.170183  230782 system_pods.go:86] 8 kube-system pods found
	I1121 14:37:46.170218  230782 system_pods.go:89] "coredns-5dd5756b68-h4xjd" [5c7fd9b1-424a-4401-932f-775af443b1b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:37:46.170224  230782 system_pods.go:89] "etcd-old-k8s-version-794941" [cc92a3b3-3d27-4b3a-806b-a99049546a7d] Running
	I1121 14:37:46.170230  230782 system_pods.go:89] "kindnet-9pjsf" [2c2633f4-bb39-4747-b0c8-c39c76f724cb] Running
	I1121 14:37:46.170234  230782 system_pods.go:89] "kube-apiserver-old-k8s-version-794941" [71dbc4a7-b490-4c73-9e72-0f7fa6d37fca] Running
	I1121 14:37:46.170238  230782 system_pods.go:89] "kube-controller-manager-old-k8s-version-794941" [ea18a1a2-fdd4-49b0-bf69-5fd9e0fe1d14] Running
	I1121 14:37:46.170242  230782 system_pods.go:89] "kube-proxy-w4rcg" [1ddd5037-510d-4bcf-b7d8-a61b6f2019e2] Running
	I1121 14:37:46.170245  230782 system_pods.go:89] "kube-scheduler-old-k8s-version-794941" [cf5d7654-5d04-4181-9499-0797085f748c] Running
	I1121 14:37:46.170250  230782 system_pods.go:89] "storage-provisioner" [e6cb0d18-f24f-4347-aa16-705c736303b1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:37:46.170265  230782 retry.go:31] will retry after 295.979231ms: missing components: kube-dns
	I1121 14:37:46.470602  230782 system_pods.go:86] 8 kube-system pods found
	I1121 14:37:46.470632  230782 system_pods.go:89] "coredns-5dd5756b68-h4xjd" [5c7fd9b1-424a-4401-932f-775af443b1b0] Running
	I1121 14:37:46.470640  230782 system_pods.go:89] "etcd-old-k8s-version-794941" [cc92a3b3-3d27-4b3a-806b-a99049546a7d] Running
	I1121 14:37:46.470646  230782 system_pods.go:89] "kindnet-9pjsf" [2c2633f4-bb39-4747-b0c8-c39c76f724cb] Running
	I1121 14:37:46.470652  230782 system_pods.go:89] "kube-apiserver-old-k8s-version-794941" [71dbc4a7-b490-4c73-9e72-0f7fa6d37fca] Running
	I1121 14:37:46.470658  230782 system_pods.go:89] "kube-controller-manager-old-k8s-version-794941" [ea18a1a2-fdd4-49b0-bf69-5fd9e0fe1d14] Running
	I1121 14:37:46.470664  230782 system_pods.go:89] "kube-proxy-w4rcg" [1ddd5037-510d-4bcf-b7d8-a61b6f2019e2] Running
	I1121 14:37:46.470694  230782 system_pods.go:89] "kube-scheduler-old-k8s-version-794941" [cf5d7654-5d04-4181-9499-0797085f748c] Running
	I1121 14:37:46.470700  230782 system_pods.go:89] "storage-provisioner" [e6cb0d18-f24f-4347-aa16-705c736303b1] Running
	I1121 14:37:46.470714  230782 system_pods.go:126] duration metric: took 859.241517ms to wait for k8s-apps to be running ...
	I1121 14:37:46.470726  230782 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:37:46.470770  230782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:37:46.484143  230782 system_svc.go:56] duration metric: took 13.407914ms WaitForService to wait for kubelet
	I1121 14:37:46.484170  230782 kubeadm.go:587] duration metric: took 14.710907647s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:37:46.484192  230782 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:37:46.487282  230782 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:37:46.487322  230782 node_conditions.go:123] node cpu capacity is 8
	I1121 14:37:46.487340  230782 node_conditions.go:105] duration metric: took 3.141886ms to run NodePressure ...
	I1121 14:37:46.487354  230782 start.go:242] waiting for startup goroutines ...
	I1121 14:37:46.487365  230782 start.go:247] waiting for cluster config update ...
	I1121 14:37:46.487386  230782 start.go:256] writing updated cluster config ...
	I1121 14:37:46.487676  230782 ssh_runner.go:195] Run: rm -f paused
	I1121 14:37:46.491353  230782 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:37:46.495625  230782 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-h4xjd" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:46.500345  230782 pod_ready.go:94] pod "coredns-5dd5756b68-h4xjd" is "Ready"
	I1121 14:37:46.500368  230782 pod_ready.go:86] duration metric: took 4.723559ms for pod "coredns-5dd5756b68-h4xjd" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:46.503002  230782 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-794941" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:46.507521  230782 pod_ready.go:94] pod "etcd-old-k8s-version-794941" is "Ready"
	I1121 14:37:46.507541  230782 pod_ready.go:86] duration metric: took 4.51989ms for pod "etcd-old-k8s-version-794941" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:46.510311  230782 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-794941" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:46.514013  230782 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-794941" is "Ready"
	I1121 14:37:46.514033  230782 pod_ready.go:86] duration metric: took 3.706279ms for pod "kube-apiserver-old-k8s-version-794941" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:46.516678  230782 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-794941" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:46.895399  230782 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-794941" is "Ready"
	I1121 14:37:46.895428  230782 pod_ready.go:86] duration metric: took 378.731297ms for pod "kube-controller-manager-old-k8s-version-794941" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:47.096006  230782 pod_ready.go:83] waiting for pod "kube-proxy-w4rcg" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:47.495851  230782 pod_ready.go:94] pod "kube-proxy-w4rcg" is "Ready"
	I1121 14:37:47.495877  230782 pod_ready.go:86] duration metric: took 399.848109ms for pod "kube-proxy-w4rcg" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:47.696542  230782 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-794941" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:48.095904  230782 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-794941" is "Ready"
	I1121 14:37:48.095932  230782 pod_ready.go:86] duration metric: took 399.35158ms for pod "kube-scheduler-old-k8s-version-794941" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:37:48.095944  230782 pod_ready.go:40] duration metric: took 1.604560174s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:37:48.147344  230782 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1121 14:37:48.148849  230782 out.go:203] 
	W1121 14:37:48.149904  230782 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1121 14:37:48.150939  230782 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1121 14:37:48.152092  230782 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-794941" cluster and "default" namespace by default
	I1121 14:37:44.659770  202147 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.057340609s)
	W1121 14:37:44.659812  202147 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1121 14:37:47.160626  202147 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:37:45.052147  239114 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1121 14:37:45.052172  239114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1121 14:37:45.120620  239114 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:37:45.199243  239114 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1121 14:37:45.199320  239114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1121 14:37:45.223890  239114 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1121 14:37:45.223937  239114 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:37:45.223994  239114 ssh_runner.go:195] Run: which crictl
	I1121 14:37:45.606392  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:37:45.606594  239114 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1121 14:37:45.606628  239114 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:37:45.606689  239114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1121 14:37:45.644367  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:37:46.734933  239114 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.128219645s)
	I1121 14:37:46.734966  239114 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1121 14:37:46.734977  239114 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.090578893s)
	I1121 14:37:46.734993  239114 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:37:46.735039  239114 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:37:46.735040  239114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1121 14:37:47.852742  239114 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.117669066s)
	I1121 14:37:47.852768  239114 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.117623216s)
	I1121 14:37:47.852791  239114 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1121 14:37:47.852792  239114 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1121 14:37:47.852821  239114 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:37:47.852860  239114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1121 14:37:47.852863  239114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:37:49.232396  239114 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.379510687s)
	I1121 14:37:49.232437  239114 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1121 14:37:49.232458  239114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1121 14:37:49.232699  239114 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.379814954s)
	I1121 14:37:49.232731  239114 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1121 14:37:49.232753  239114 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:37:49.232804  239114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1121 14:37:48.959136  202147 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:52556->192.168.76.2:8443: read: connection reset by peer
	I1121 14:37:48.959213  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:37:48.959269  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:37:48.987388  202147 cri.go:89] found id: "92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:37:48.987407  202147 cri.go:89] found id: "5a48743f053dd94ea1c19f18ed0d8b7f694b814d093ea0495600ae2b8d09e855"
	I1121 14:37:48.987413  202147 cri.go:89] found id: ""
	I1121 14:37:48.987423  202147 logs.go:282] 2 containers: [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8 5a48743f053dd94ea1c19f18ed0d8b7f694b814d093ea0495600ae2b8d09e855]
	I1121 14:37:48.987487  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:37:48.991389  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:37:48.994936  202147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:37:48.994996  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:37:49.022062  202147 cri.go:89] found id: ""
	I1121 14:37:49.022087  202147 logs.go:282] 0 containers: []
	W1121 14:37:49.022096  202147 logs.go:284] No container was found matching "etcd"
	I1121 14:37:49.022103  202147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:37:49.022156  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:37:49.052507  202147 cri.go:89] found id: ""
	I1121 14:37:49.052535  202147 logs.go:282] 0 containers: []
	W1121 14:37:49.052545  202147 logs.go:284] No container was found matching "coredns"
	I1121 14:37:49.052552  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:37:49.052626  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:37:49.083417  202147 cri.go:89] found id: "3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:37:49.083443  202147 cri.go:89] found id: ""
	I1121 14:37:49.083453  202147 logs.go:282] 1 containers: [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980]
	I1121 14:37:49.083504  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:37:49.088370  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:37:49.088433  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:37:49.118521  202147 cri.go:89] found id: ""
	I1121 14:37:49.118548  202147 logs.go:282] 0 containers: []
	W1121 14:37:49.118637  202147 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:37:49.118656  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:37:49.118720  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:37:49.149131  202147 cri.go:89] found id: "830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291"
	I1121 14:37:49.149153  202147 cri.go:89] found id: ""
	I1121 14:37:49.149163  202147 logs.go:282] 1 containers: [830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291]
	I1121 14:37:49.149215  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:37:49.153825  202147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:37:49.153884  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:37:49.184480  202147 cri.go:89] found id: ""
	I1121 14:37:49.184502  202147 logs.go:282] 0 containers: []
	W1121 14:37:49.184510  202147 logs.go:284] No container was found matching "kindnet"
	I1121 14:37:49.184516  202147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:37:49.184592  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:37:49.216671  202147 cri.go:89] found id: ""
	I1121 14:37:49.216697  202147 logs.go:282] 0 containers: []
	W1121 14:37:49.216707  202147 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:37:49.216730  202147 logs.go:123] Gathering logs for kubelet ...
	I1121 14:37:49.216743  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:37:49.325275  202147 logs.go:123] Gathering logs for dmesg ...
	I1121 14:37:49.325314  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:37:49.341356  202147 logs.go:123] Gathering logs for kube-apiserver [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8] ...
	I1121 14:37:49.341380  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:37:49.377284  202147 logs.go:123] Gathering logs for kube-apiserver [5a48743f053dd94ea1c19f18ed0d8b7f694b814d093ea0495600ae2b8d09e855] ...
	I1121 14:37:49.377317  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5a48743f053dd94ea1c19f18ed0d8b7f694b814d093ea0495600ae2b8d09e855"
	I1121 14:37:49.417281  202147 logs.go:123] Gathering logs for kube-scheduler [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980] ...
	I1121 14:37:49.417316  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:37:49.474819  202147 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:37:49.474848  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:37:49.521355  202147 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:37:49.521383  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:37:49.587129  202147 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:37:49.587154  202147 logs.go:123] Gathering logs for kube-controller-manager [830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291] ...
	I1121 14:37:49.587168  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291"
	I1121 14:37:49.615034  202147 logs.go:123] Gathering logs for container status ...
	I1121 14:37:49.615056  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:37:52.146245  202147 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:37:52.146725  202147 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:37:52.146782  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:37:52.146841  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:37:52.175017  202147 cri.go:89] found id: "92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:37:52.175040  202147 cri.go:89] found id: ""
	I1121 14:37:52.175050  202147 logs.go:282] 1 containers: [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8]
	I1121 14:37:52.175102  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:37:52.178637  202147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:37:52.178685  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:37:52.204548  202147 cri.go:89] found id: ""
	I1121 14:37:52.204592  202147 logs.go:282] 0 containers: []
	W1121 14:37:52.204610  202147 logs.go:284] No container was found matching "etcd"
	I1121 14:37:52.204622  202147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:37:52.204675  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:37:52.229841  202147 cri.go:89] found id: ""
	I1121 14:37:52.229863  202147 logs.go:282] 0 containers: []
	W1121 14:37:52.229872  202147 logs.go:284] No container was found matching "coredns"
	I1121 14:37:52.229879  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:37:52.229928  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:37:52.256402  202147 cri.go:89] found id: "3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:37:52.256422  202147 cri.go:89] found id: ""
	I1121 14:37:52.256431  202147 logs.go:282] 1 containers: [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980]
	I1121 14:37:52.256478  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:37:52.260081  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:37:52.260134  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:37:52.285260  202147 cri.go:89] found id: ""
	I1121 14:37:52.285279  202147 logs.go:282] 0 containers: []
	W1121 14:37:52.285287  202147 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:37:52.285293  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:37:52.285353  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:37:52.310226  202147 cri.go:89] found id: "aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:37:52.310247  202147 cri.go:89] found id: "830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291"
	I1121 14:37:52.310253  202147 cri.go:89] found id: ""
	I1121 14:37:52.310263  202147 logs.go:282] 2 containers: [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0 830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291]
	I1121 14:37:52.310318  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:37:52.314203  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:37:52.317603  202147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:37:52.317650  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:37:52.342097  202147 cri.go:89] found id: ""
	I1121 14:37:52.342117  202147 logs.go:282] 0 containers: []
	W1121 14:37:52.342125  202147 logs.go:284] No container was found matching "kindnet"
	I1121 14:37:52.342132  202147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:37:52.342189  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:37:52.367248  202147 cri.go:89] found id: ""
	I1121 14:37:52.367266  202147 logs.go:282] 0 containers: []
	W1121 14:37:52.367276  202147 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:37:52.367293  202147 logs.go:123] Gathering logs for kube-apiserver [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8] ...
	I1121 14:37:52.367314  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:37:52.399576  202147 logs.go:123] Gathering logs for kube-scheduler [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980] ...
	I1121 14:37:52.399605  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:37:52.447609  202147 logs.go:123] Gathering logs for kube-controller-manager [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0] ...
	I1121 14:37:52.447634  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:37:52.474090  202147 logs.go:123] Gathering logs for kube-controller-manager [830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291] ...
	I1121 14:37:52.474116  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 830f4be50e039d8a018c60ae0e2cbe357acdd591451f1ac139867871d1d67291"
	I1121 14:37:52.498821  202147 logs.go:123] Gathering logs for dmesg ...
	I1121 14:37:52.498848  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:37:52.512683  202147 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:37:52.512708  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:37:52.567593  202147 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:37:52.567615  202147 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:37:52.567629  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:37:52.614048  202147 logs.go:123] Gathering logs for container status ...
	I1121 14:37:52.614075  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:37:52.642883  202147 logs.go:123] Gathering logs for kubelet ...
	I1121 14:37:52.642905  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
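
The repeated pattern in process 202147 above is minikube's diagnostics pass when the apiserver healthz probe keeps failing: for each control-plane component it lists matching containers, then tails their logs. A minimal shell sketch of that loop, assuming only that crictl is on the node's PATH (the component names and the 400-line tail are taken from the log itself; this is a reconstruction, not minikube's actual Go code):

	# Sketch: reproduce the per-component log gathering seen above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet storage-provisioner; do
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    sudo crictl logs --tail 400 "$id"
	  done
	done
	# Journals and overall container status are gathered alongside:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
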
	I1121 14:37:50.549976  239114 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.317147922s)
	I1121 14:37:50.550004  239114 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1121 14:37:50.550030  239114 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:37:50.550074  239114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1121 14:37:51.578055  239114 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.027958727s)
	I1121 14:37:51.578079  239114 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1121 14:37:51.578106  239114 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:37:51.578143  239114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1121 14:37:54.971145  239114 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.392975657s)
	I1121 14:37:54.971173  239114 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1121 14:37:54.971196  239114 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1121 14:37:54.971238  239114 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
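
Process 239114 above is pre-loading cached images into the container store that CRI-O reads. The per-image flow, reconstructed from the cache_images.go/crio.go lines (a hedged sketch using the storage-provisioner image as the example; commands and paths are the ones printed above):

	# 1. Remove any stale copy of the image on the node.
	sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	# 2. Check whether the tarball already exists on the node; exit status 1
	#    (as seen above) means it gets copied over SSH from the host cache first.
	stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	# 3. Import the transferred tarball; podman writes to the same
	#    containers/storage that CRI-O serves images from.
	sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5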
	
	
	==> CRI-O <==
	Nov 21 14:37:45 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:45.884754378Z" level=info msg="Starting container: e9f9f2a3522e7998c4018abf1941ba20f17ccfca34063bc934d9481bdee69891" id=c5ea27db-1df6-40a4-bb2b-dc7e40bd9ec8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:37:45 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:45.886551Z" level=info msg="Started container" PID=2149 containerID=e9f9f2a3522e7998c4018abf1941ba20f17ccfca34063bc934d9481bdee69891 description=kube-system/coredns-5dd5756b68-h4xjd/coredns id=c5ea27db-1df6-40a4-bb2b-dc7e40bd9ec8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9e6e5cbcb1ce293d5a9fbd4a13f0ba0d03e63f112cc07e0cf9c83a0b5be4ddeb
	Nov 21 14:37:48 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:48.589222054Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ed50ff39-67c9-4b63-a4d9-03526bbae606 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:37:48 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:48.589300108Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:37:48 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:48.595081978Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5c9c44b35151618582af8a5ea0cffc00faa0959c0e3d3b71f3eda5bf1bf1391a UID:d07a0f79-8b73-4999-a3a1-654a71184bf3 NetNS:/var/run/netns/ca2cbbf7-8e19-4d7f-abf2-c68cd7bff193 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004c87d0}] Aliases:map[]}"
	Nov 21 14:37:48 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:48.595116092Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 21 14:37:48 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:48.605002733Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5c9c44b35151618582af8a5ea0cffc00faa0959c0e3d3b71f3eda5bf1bf1391a UID:d07a0f79-8b73-4999-a3a1-654a71184bf3 NetNS:/var/run/netns/ca2cbbf7-8e19-4d7f-abf2-c68cd7bff193 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004c87d0}] Aliases:map[]}"
	Nov 21 14:37:48 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:48.605175179Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 21 14:37:48 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:48.606080895Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 14:37:48 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:48.607203203Z" level=info msg="Ran pod sandbox 5c9c44b35151618582af8a5ea0cffc00faa0959c0e3d3b71f3eda5bf1bf1391a with infra container: default/busybox/POD" id=ed50ff39-67c9-4b63-a4d9-03526bbae606 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:37:48 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:48.608335999Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5f886400-3fc6-46df-a7f8-b2ecb3fd29f9 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:37:48 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:48.608438804Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5f886400-3fc6-46df-a7f8-b2ecb3fd29f9 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:37:48 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:48.608469405Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5f886400-3fc6-46df-a7f8-b2ecb3fd29f9 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:37:48 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:48.609075804Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=172e70b8-0f53-4643-8228-9dad308b8414 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:37:48 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:48.610398518Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:37:49 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:49.372277728Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=172e70b8-0f53-4643-8228-9dad308b8414 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:37:49 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:49.373021991Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b1d9da57-c872-4ee4-ac80-ed4b281455d0 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:37:49 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:49.374442226Z" level=info msg="Creating container: default/busybox/busybox" id=ea2146a0-a3ba-45ae-a23c-6c5bd69cd716 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:37:49 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:49.374716076Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:37:49 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:49.379351695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:37:49 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:49.379809682Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:37:49 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:49.405735482Z" level=info msg="Created container b614977ce9bb15ae7c3a121f64965f2affaf004ab03e1638db950b3f9393ab66: default/busybox/busybox" id=ea2146a0-a3ba-45ae-a23c-6c5bd69cd716 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:37:49 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:49.406287287Z" level=info msg="Starting container: b614977ce9bb15ae7c3a121f64965f2affaf004ab03e1638db950b3f9393ab66" id=f7ce371d-6741-43c3-9d32-0c93e2c5af7d name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:37:49 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:49.408677053Z" level=info msg="Started container" PID=2221 containerID=b614977ce9bb15ae7c3a121f64965f2affaf004ab03e1638db950b3f9393ab66 description=default/busybox/busybox id=f7ce371d-6741-43c3-9d32-0c93e2c5af7d name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c9c44b35151618582af8a5ea0cffc00faa0959c0e3d3b71f3eda5bf1bf1391a
	Nov 21 14:37:56 old-k8s-version-794941 crio[776]: time="2025-11-21T14:37:56.386897621Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	b614977ce9bb1       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   5c9c44b351516       busybox                                          default
	e9f9f2a3522e7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      11 seconds ago      Running             coredns                   0                   9e6e5cbcb1ce2       coredns-5dd5756b68-h4xjd                         kube-system
	bd88ff7bbe183       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   cf8712c2adca3       storage-provisioner                              kube-system
	02a8407db9aec       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   5435bdb657b5f       kindnet-9pjsf                                    kube-system
	799d42c0de045       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   fd6c88f37f93c       kube-proxy-w4rcg                                 kube-system
	ce13e5f23a685       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      42 seconds ago      Running             etcd                      0                   4adc897a44408       etcd-old-k8s-version-794941                      kube-system
	467bdd3b9169e       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      42 seconds ago      Running             kube-controller-manager   0                   32ba60a1410fe       kube-controller-manager-old-k8s-version-794941   kube-system
	39c1749f44a0f       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      42 seconds ago      Running             kube-scheduler            0                   96a773de2d1ef       kube-scheduler-old-k8s-version-794941            kube-system
	48c9a40e6cdfc       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      42 seconds ago      Running             kube-apiserver            0                   48986528265dc       kube-apiserver-old-k8s-version-794941            kube-system
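
The CONTAINER column above is the 13-character prefix of the full 64-character container ID used elsewhere in this log (e.g. e9f9f2a3522e7 is the coredns container e9f9f2a3522e7998c... started at 14:37:45). One way to expand a prefix on the node, assuming crictl on PATH:

	# crictl --quiet prints full container IDs; match on the truncated prefix.
	sudo crictl ps -a --quiet | grep '^e9f9f2a3522e7'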
	
	
	==> coredns [e9f9f2a3522e7998c4018abf1941ba20f17ccfca34063bc934d9481bdee69891] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60051 - 1679 "HINFO IN 4025810076662049030.5065388041594663227. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.138808764s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-794941
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-794941
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=old-k8s-version-794941
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_37_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:37:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-794941
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:37:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:37:50 +0000   Fri, 21 Nov 2025 14:37:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:37:50 +0000   Fri, 21 Nov 2025 14:37:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:37:50 +0000   Fri, 21 Nov 2025 14:37:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:37:50 +0000   Fri, 21 Nov 2025 14:37:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-794941
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                7ff44db4-ba6f-408c-b662-b0a6f3e0bc74
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-h4xjd                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-old-k8s-version-794941                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-9pjsf                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-794941             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-old-k8s-version-794941    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-w4rcg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-794941             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 38s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s   kubelet          Node old-k8s-version-794941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s   kubelet          Node old-k8s-version-794941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s   kubelet          Node old-k8s-version-794941 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node old-k8s-version-794941 event: Registered Node old-k8s-version-794941 in Controller
	  Normal  NodeReady                12s   kubelet          Node old-k8s-version-794941 status is now: NodeReady
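
A quick check of the integer percentages in the Allocated resources table above: kubectl divides each total by the node's Allocatable value and truncates, which the figures from this output confirm:

	echo $(( 850 * 100 / 8000 ))        # CPU requests: 850m of 8000m -> 10 (%)
	echo $(( 100 * 100 / 8000 ))        # CPU limits:   100m of 8000m -> 1 (%)
	echo $(( 225280 * 100 / 32863352 )) # memory: 220Mi = 225280Ki of 32863352Ki -> 0 (%)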
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [ce13e5f23a685160821e6a7577d0a3e9f79873fd46104143af9c0f22a812813b] <==
	{"level":"info","ts":"2025-11-21T14:37:15.032879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-11-21T14:37:15.033012Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-21T14:37:15.033855Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-21T14:37:15.033911Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-21T14:37:15.03395Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-21T14:37:15.034056Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-21T14:37:15.034103Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-21T14:37:15.124047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-21T14:37:15.124083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-21T14:37:15.124111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-11-21T14:37:15.124124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-11-21T14:37:15.124131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-21T14:37:15.124138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-11-21T14:37:15.124146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-21T14:37:15.124706Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:37:15.125284Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-794941 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-21T14:37:15.125293Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:37:15.125498Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-21T14:37:15.125577Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-21T14:37:15.125312Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:37:15.125605Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:37:15.125684Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:37:15.125713Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:37:15.126766Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-11-21T14:37:15.126989Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:37:57 up  1:20,  0 user,  load average: 2.49, 2.36, 1.57
	Linux old-k8s-version-794941 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [02a8407db9aeca30de8f7545e7c8b420af9f75706015c1392275eb454e8552ac] <==
	I1121 14:37:34.745987       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:37:34.746274       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1121 14:37:34.746458       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:37:34.746478       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:37:34.746500       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:37:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:37:34.945953       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:37:34.946641       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:37:34.946679       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:37:35.025472       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:37:35.325494       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:37:35.325527       1 metrics.go:72] Registering metrics
	I1121 14:37:35.325624       1 controller.go:711] "Syncing nftables rules"
	I1121 14:37:44.953653       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:37:44.953753       1 main.go:301] handling current node
	I1121 14:37:54.947154       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:37:54.947181       1 main.go:301] handling current node
	
	
	==> kube-apiserver [48c9a40e6cdfc004cd75093ae3c5fd3d2a8647dd2d6e509a60a0fd472567fbce] <==
	I1121 14:37:16.466355       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1121 14:37:16.466368       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1121 14:37:16.466615       1 shared_informer.go:318] Caches are synced for configmaps
	I1121 14:37:16.466948       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1121 14:37:16.467006       1 aggregator.go:166] initial CRD sync complete...
	I1121 14:37:16.467019       1 autoregister_controller.go:141] Starting autoregister controller
	I1121 14:37:16.467025       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:37:16.467035       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:37:16.467597       1 controller.go:624] quota admission added evaluator for: namespaces
	I1121 14:37:16.661586       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:37:17.364136       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:37:17.367484       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:37:17.367497       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:37:17.734816       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:37:17.765935       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:37:17.876224       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:37:17.880473       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1121 14:37:17.881223       1 controller.go:624] quota admission added evaluator for: endpoints
	I1121 14:37:17.884452       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:37:18.405285       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1121 14:37:19.222945       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1121 14:37:19.232027       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:37:19.240914       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1121 14:37:32.065096       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:37:32.161169       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [467bdd3b9169e42aaf94c2c00e4ccedf11f45b51c233d4ff7b14510f9f3f68c1] <==
	I1121 14:37:31.312673       1 shared_informer.go:318] Caches are synced for PVC protection
	I1121 14:37:31.330879       1 shared_informer.go:318] Caches are synced for attach detach
	I1121 14:37:31.342132       1 shared_informer.go:318] Caches are synced for deployment
	I1121 14:37:31.353318       1 shared_informer.go:318] Caches are synced for disruption
	I1121 14:37:31.388519       1 shared_informer.go:318] Caches are synced for resource quota
	I1121 14:37:31.462126       1 shared_informer.go:318] Caches are synced for resource quota
	I1121 14:37:31.777630       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:37:31.853915       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:37:31.853948       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1121 14:37:32.074307       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w4rcg"
	I1121 14:37:32.076938       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9pjsf"
	I1121 14:37:32.164046       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1121 14:37:32.178474       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1121 14:37:32.264299       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-9blj4"
	I1121 14:37:32.268797       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-h4xjd"
	I1121 14:37:32.284076       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.084397ms"
	I1121 14:37:32.290969       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-9blj4"
	I1121 14:37:32.298336       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.216805ms"
	I1121 14:37:32.303441       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.062382ms"
	I1121 14:37:32.303591       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.894µs"
	I1121 14:37:45.532447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="173.716µs"
	I1121 14:37:45.554852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="143.215µs"
	I1121 14:37:46.255906       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1121 14:37:46.387695       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.416438ms"
	I1121 14:37:46.387817       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.068µs"
	
	
	==> kube-proxy [799d42c0de045aa7a40e2717b60fb4352c5fe574ceb102fa72b7118e32120c12] <==
	I1121 14:37:32.463066       1 server_others.go:69] "Using iptables proxy"
	I1121 14:37:32.472117       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1121 14:37:32.489925       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:37:32.492113       1 server_others.go:152] "Using iptables Proxier"
	I1121 14:37:32.492136       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1121 14:37:32.492142       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1121 14:37:32.492162       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1121 14:37:32.492337       1 server.go:846] "Version info" version="v1.28.0"
	I1121 14:37:32.492347       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:37:32.492849       1 config.go:315] "Starting node config controller"
	I1121 14:37:32.492883       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1121 14:37:32.492916       1 config.go:97] "Starting endpoint slice config controller"
	I1121 14:37:32.492940       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1121 14:37:32.493038       1 config.go:188] "Starting service config controller"
	I1121 14:37:32.493049       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1121 14:37:32.593275       1 shared_informer.go:318] Caches are synced for node config
	I1121 14:37:32.593309       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1121 14:37:32.593307       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [39c1749f44a0f7c5111d36b5451c5e1b205f7554c07641ca316537f22f0ee834] <==
	W1121 14:37:16.422997       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1121 14:37:16.423024       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1121 14:37:16.423031       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1121 14:37:16.423053       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1121 14:37:16.423082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1121 14:37:16.423104       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1121 14:37:16.423129       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1121 14:37:16.423109       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1121 14:37:16.423236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1121 14:37:16.423259       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1121 14:37:17.299807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1121 14:37:17.299843       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1121 14:37:17.324330       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1121 14:37:17.324354       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1121 14:37:17.328573       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1121 14:37:17.328599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1121 14:37:17.461433       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1121 14:37:17.461460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1121 14:37:17.501866       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1121 14:37:17.501892       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1121 14:37:17.543401       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1121 14:37:17.543428       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1121 14:37:17.587223       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1121 14:37:17.587262       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1121 14:37:18.019825       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 21 14:37:31 old-k8s-version-794941 kubelet[1407]: I1121 14:37:31.214955    1407 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:37:31 old-k8s-version-794941 kubelet[1407]: I1121 14:37:31.215642    1407 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:37:32 old-k8s-version-794941 kubelet[1407]: I1121 14:37:32.082733    1407 topology_manager.go:215] "Topology Admit Handler" podUID="1ddd5037-510d-4bcf-b7d8-a61b6f2019e2" podNamespace="kube-system" podName="kube-proxy-w4rcg"
	Nov 21 14:37:32 old-k8s-version-794941 kubelet[1407]: I1121 14:37:32.086317    1407 topology_manager.go:215] "Topology Admit Handler" podUID="2c2633f4-bb39-4747-b0c8-c39c76f724cb" podNamespace="kube-system" podName="kindnet-9pjsf"
	Nov 21 14:37:32 old-k8s-version-794941 kubelet[1407]: I1121 14:37:32.164998    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ddd5037-510d-4bcf-b7d8-a61b6f2019e2-kube-proxy\") pod \"kube-proxy-w4rcg\" (UID: \"1ddd5037-510d-4bcf-b7d8-a61b6f2019e2\") " pod="kube-system/kube-proxy-w4rcg"
	Nov 21 14:37:32 old-k8s-version-794941 kubelet[1407]: I1121 14:37:32.165051    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mvfz\" (UniqueName: \"kubernetes.io/projected/1ddd5037-510d-4bcf-b7d8-a61b6f2019e2-kube-api-access-7mvfz\") pod \"kube-proxy-w4rcg\" (UID: \"1ddd5037-510d-4bcf-b7d8-a61b6f2019e2\") " pod="kube-system/kube-proxy-w4rcg"
	Nov 21 14:37:32 old-k8s-version-794941 kubelet[1407]: I1121 14:37:32.165092    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2c2633f4-bb39-4747-b0c8-c39c76f724cb-cni-cfg\") pod \"kindnet-9pjsf\" (UID: \"2c2633f4-bb39-4747-b0c8-c39c76f724cb\") " pod="kube-system/kindnet-9pjsf"
	Nov 21 14:37:32 old-k8s-version-794941 kubelet[1407]: I1121 14:37:32.165125    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c2633f4-bb39-4747-b0c8-c39c76f724cb-lib-modules\") pod \"kindnet-9pjsf\" (UID: \"2c2633f4-bb39-4747-b0c8-c39c76f724cb\") " pod="kube-system/kindnet-9pjsf"
	Nov 21 14:37:32 old-k8s-version-794941 kubelet[1407]: I1121 14:37:32.165157    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ddd5037-510d-4bcf-b7d8-a61b6f2019e2-xtables-lock\") pod \"kube-proxy-w4rcg\" (UID: \"1ddd5037-510d-4bcf-b7d8-a61b6f2019e2\") " pod="kube-system/kube-proxy-w4rcg"
	Nov 21 14:37:32 old-k8s-version-794941 kubelet[1407]: I1121 14:37:32.165231    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c2633f4-bb39-4747-b0c8-c39c76f724cb-xtables-lock\") pod \"kindnet-9pjsf\" (UID: \"2c2633f4-bb39-4747-b0c8-c39c76f724cb\") " pod="kube-system/kindnet-9pjsf"
	Nov 21 14:37:32 old-k8s-version-794941 kubelet[1407]: I1121 14:37:32.165274    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ddd5037-510d-4bcf-b7d8-a61b6f2019e2-lib-modules\") pod \"kube-proxy-w4rcg\" (UID: \"1ddd5037-510d-4bcf-b7d8-a61b6f2019e2\") " pod="kube-system/kube-proxy-w4rcg"
	Nov 21 14:37:32 old-k8s-version-794941 kubelet[1407]: I1121 14:37:32.165307    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9wvb\" (UniqueName: \"kubernetes.io/projected/2c2633f4-bb39-4747-b0c8-c39c76f724cb-kube-api-access-j9wvb\") pod \"kindnet-9pjsf\" (UID: \"2c2633f4-bb39-4747-b0c8-c39c76f724cb\") " pod="kube-system/kindnet-9pjsf"
	Nov 21 14:37:33 old-k8s-version-794941 kubelet[1407]: I1121 14:37:33.342137    1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-w4rcg" podStartSLOduration=1.342083869 podCreationTimestamp="2025-11-21 14:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:37:33.34182881 +0000 UTC m=+14.141849363" watchObservedRunningTime="2025-11-21 14:37:33.342083869 +0000 UTC m=+14.142104421"
	Nov 21 14:37:35 old-k8s-version-794941 kubelet[1407]: I1121 14:37:35.347786    1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-9pjsf" podStartSLOduration=1.289444133 podCreationTimestamp="2025-11-21 14:37:32 +0000 UTC" firstStartedPulling="2025-11-21 14:37:32.395060569 +0000 UTC m=+13.195081112" lastFinishedPulling="2025-11-21 14:37:34.453350256 +0000 UTC m=+15.253370799" observedRunningTime="2025-11-21 14:37:35.34754525 +0000 UTC m=+16.147565804" watchObservedRunningTime="2025-11-21 14:37:35.34773382 +0000 UTC m=+16.147754374"
	Nov 21 14:37:45 old-k8s-version-794941 kubelet[1407]: I1121 14:37:45.323014    1407 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 21 14:37:45 old-k8s-version-794941 kubelet[1407]: I1121 14:37:45.528397    1407 topology_manager.go:215] "Topology Admit Handler" podUID="e6cb0d18-f24f-4347-aa16-705c736303b1" podNamespace="kube-system" podName="storage-provisioner"
	Nov 21 14:37:45 old-k8s-version-794941 kubelet[1407]: I1121 14:37:45.531065    1407 topology_manager.go:215] "Topology Admit Handler" podUID="5c7fd9b1-424a-4401-932f-775af443b1b0" podNamespace="kube-system" podName="coredns-5dd5756b68-h4xjd"
	Nov 21 14:37:45 old-k8s-version-794941 kubelet[1407]: I1121 14:37:45.562039    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c7fd9b1-424a-4401-932f-775af443b1b0-config-volume\") pod \"coredns-5dd5756b68-h4xjd\" (UID: \"5c7fd9b1-424a-4401-932f-775af443b1b0\") " pod="kube-system/coredns-5dd5756b68-h4xjd"
	Nov 21 14:37:45 old-k8s-version-794941 kubelet[1407]: I1121 14:37:45.562174    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws9hg\" (UniqueName: \"kubernetes.io/projected/e6cb0d18-f24f-4347-aa16-705c736303b1-kube-api-access-ws9hg\") pod \"storage-provisioner\" (UID: \"e6cb0d18-f24f-4347-aa16-705c736303b1\") " pod="kube-system/storage-provisioner"
	Nov 21 14:37:45 old-k8s-version-794941 kubelet[1407]: I1121 14:37:45.562230    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e6cb0d18-f24f-4347-aa16-705c736303b1-tmp\") pod \"storage-provisioner\" (UID: \"e6cb0d18-f24f-4347-aa16-705c736303b1\") " pod="kube-system/storage-provisioner"
	Nov 21 14:37:45 old-k8s-version-794941 kubelet[1407]: I1121 14:37:45.562271    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27kp8\" (UniqueName: \"kubernetes.io/projected/5c7fd9b1-424a-4401-932f-775af443b1b0-kube-api-access-27kp8\") pod \"coredns-5dd5756b68-h4xjd\" (UID: \"5c7fd9b1-424a-4401-932f-775af443b1b0\") " pod="kube-system/coredns-5dd5756b68-h4xjd"
	Nov 21 14:37:46 old-k8s-version-794941 kubelet[1407]: I1121 14:37:46.379091    1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-h4xjd" podStartSLOduration=14.379044602 podCreationTimestamp="2025-11-21 14:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:37:46.378944587 +0000 UTC m=+27.178965140" watchObservedRunningTime="2025-11-21 14:37:46.379044602 +0000 UTC m=+27.179065150"
	Nov 21 14:37:46 old-k8s-version-794941 kubelet[1407]: I1121 14:37:46.379601    1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.379545041 podCreationTimestamp="2025-11-21 14:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:37:46.368582042 +0000 UTC m=+27.168602588" watchObservedRunningTime="2025-11-21 14:37:46.379545041 +0000 UTC m=+27.179565627"
	Nov 21 14:37:48 old-k8s-version-794941 kubelet[1407]: I1121 14:37:48.287667    1407 topology_manager.go:215] "Topology Admit Handler" podUID="d07a0f79-8b73-4999-a3a1-654a71184bf3" podNamespace="default" podName="busybox"
	Nov 21 14:37:48 old-k8s-version-794941 kubelet[1407]: I1121 14:37:48.377008    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgftz\" (UniqueName: \"kubernetes.io/projected/d07a0f79-8b73-4999-a3a1-654a71184bf3-kube-api-access-pgftz\") pod \"busybox\" (UID: \"d07a0f79-8b73-4999-a3a1-654a71184bf3\") " pod="default/busybox"
	
	
	==> storage-provisioner [bd88ff7bbe1831223a503d886c85c4c3578bb035c78b51c5462b9e5310d2433e] <==
	I1121 14:37:45.911409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:37:45.919651       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:37:45.919744       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1121 14:37:45.929359       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:37:45.929582       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-794941_f1643699-8913-458b-8771-10f52307107b!
	I1121 14:37:45.930189       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"52963864-7dbb-487b-9dd7-9bcf5d76cf34", APIVersion:"v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-794941_f1643699-8913-458b-8771-10f52307107b became leader
	I1121 14:37:46.030712       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-794941_f1643699-8913-458b-8771-10f52307107b!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-794941 -n old-k8s-version-794941
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-794941 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.06s)
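The repeated "forbidden" warnings from kube-scheduler in the log above are the usual transient RBAC errors emitted while the apiserver is still reconciling its default ClusterRoleBindings at startup; they stop once the informer caches sync (the final shared_informer line). As a quick sanity check after the cluster settles, a hedged sketch reusing the profile's kubectl context from this run:

	# should print "yes" once the system:kube-scheduler binding has reconciled
	kubectl --context old-k8s-version-794941 auth can-i list persistentvolumeclaims --as=system:kube-scheduler -A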

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-589411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-589411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (243.604726ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:38:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-589411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
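The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's pre-flight "is the cluster paused?" check, which shells out to `sudo runc list -f json` on the node. With the crio runtime that path is not guaranteed to exist: CRI-O keeps its runc state under its own root, so the default /run/runc directory may never be created, which matches the "open /run/runc: no such file or directory" stderr here. A minimal sketch to reproduce the check by hand, assuming the node from this run is still up:

	minikube ssh -p no-preload-589411 -- sudo runc list -f json   # fails the same way: /run/runc missing
	minikube ssh -p no-preload-589411 -- sudo crictl ps -a        # CRI-O's own view of the containers works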
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-589411 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-589411 describe deploy/metrics-server -n kube-system: exit status 1 (56.696774ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-589411 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
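When the enable step succeeds, the assertion at start_stop_delete_test.go:219 checks that the deployment's container image was rewritten with the custom registry. A hedged one-liner to inspect that field directly (here it fails earlier because the deployment was never created):

	# on a healthy run this should print fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context no-preload-589411 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'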
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-589411
helpers_test.go:243: (dbg) docker inspect no-preload-589411:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45",
	        "Created": "2025-11-21T14:37:40.849517293Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 239569,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:37:40.897258097Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45/hosts",
	        "LogPath": "/var/lib/docker/containers/2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45/2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45-json.log",
	        "Name": "/no-preload-589411",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-589411:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-589411",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45",
	                "LowerDir": "/var/lib/docker/overlay2/5b439a772b1bafc04ec7400efb1953394a63935256474aa83fdd49a49549b264-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b439a772b1bafc04ec7400efb1953394a63935256474aa83fdd49a49549b264/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b439a772b1bafc04ec7400efb1953394a63935256474aa83fdd49a49549b264/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b439a772b1bafc04ec7400efb1953394a63935256474aa83fdd49a49549b264/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-589411",
	                "Source": "/var/lib/docker/volumes/no-preload-589411/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-589411",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-589411",
	                "name.minikube.sigs.k8s.io": "no-preload-589411",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0002f32adc58653aeaca4662283a7502d9bb965fd93d972edb6ec53eed787122",
	            "SandboxKey": "/var/run/docker/netns/0002f32adc58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-589411": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "16216427221de4c7c427a254dcd5d0745c57cde4857ab5c433751b20e1dda883",
	                    "EndpointID": "36a3db80e3eeb6fef02268910053168d5131410da6546d4eb7ad3313cc3c0438",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "36:a4:5c:d4:b1:f5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-589411",
	                        "2ba122d6d7a1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
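A quick way to pull a single field out of inspect output like the block above is docker's built-in Go-template support; for example, the host port forwarded to the apiserver's 8443/tcp (33062 in the NetworkSettings block):

	# prints 33062 for this container
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-589411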
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589411 -n no-preload-589411
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-589411 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-589411 logs -n 25: (1.027212903s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-384928 ssh -n multinode-384928 sudo cat /home/docker/cp-test_multinode-384928-m03_multinode-384928.txt                                                                                                                              │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ cp      │ multinode-384928 cp multinode-384928-m03:/home/docker/cp-test.txt multinode-384928-m02:/home/docker/cp-test_multinode-384928-m03_multinode-384928-m02.txt                                                                                     │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m03 sudo cat /home/docker/cp-test.txt                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ ssh     │ multinode-384928 ssh -n multinode-384928-m02 sudo cat /home/docker/cp-test_multinode-384928-m03_multinode-384928-m02.txt                                                                                                                      │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ node    │ multinode-384928 node stop m03                                                                                                                                                                                                                │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ node    │ multinode-384928 node start m03 -v=5 --alsologtostderr                                                                                                                                                                                        │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ node    │ list -p multinode-384928                                                                                                                                                                                                                      │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ stop    │ -p multinode-384928                                                                                                                                                                                                                           │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p cert-expiration-046125 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-046125   │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p cert-options-116734 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ delete  │ -p force-systemd-env-653926                                                                                                                                                                                                                   │ force-systemd-env-653926 │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p pause-738756 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                                     │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:37 UTC │
	│ ssh     │ cert-options-116734 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ ssh     │ -p cert-options-116734 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ delete  │ -p cert-options-116734                                                                                                                                                                                                                        │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:37 UTC │
	│ start   │ -p pause-738756 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ pause   │ -p pause-738756 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	│ delete  │ -p pause-738756                                                                                                                                                                                                                               │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ start   │ -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589411        │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-794941 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	│ stop    │ -p old-k8s-version-794941 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-794941 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ start   │ -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-589411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-589411        │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:38:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:38:15.124199  245960 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:38:15.124461  245960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:38:15.124471  245960 out.go:374] Setting ErrFile to fd 2...
	I1121 14:38:15.124476  245960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:38:15.124668  245960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:38:15.125037  245960 out.go:368] Setting JSON to false
	I1121 14:38:15.126065  245960 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4844,"bootTime":1763731051,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:38:15.126150  245960 start.go:143] virtualization: kvm guest
	I1121 14:38:15.127848  245960 out.go:179] * [old-k8s-version-794941] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:38:15.129191  245960 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:38:15.129217  245960 notify.go:221] Checking for updates...
	I1121 14:38:15.131227  245960 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:38:15.132388  245960 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:38:15.133326  245960 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:38:15.134289  245960 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:38:15.135313  245960 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:38:15.136609  245960 config.go:182] Loaded profile config "old-k8s-version-794941": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1121 14:38:15.138379  245960 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1121 14:38:15.139264  245960 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:38:15.162039  245960 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:38:15.162120  245960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:38:15.220115  245960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:38:15.211029645 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:38:15.220206  245960 docker.go:319] overlay module found
	I1121 14:38:15.222329  245960 out.go:179] * Using the docker driver based on existing profile
	I1121 14:38:15.223147  245960 start.go:309] selected driver: docker
	I1121 14:38:15.223169  245960 start.go:930] validating driver "docker" against &{Name:old-k8s-version-794941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-794941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:38:15.223265  245960 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:38:15.223948  245960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:38:15.280636  245960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:38:15.270229023 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:38:15.281006  245960 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:38:15.281041  245960 cni.go:84] Creating CNI manager for ""
	I1121 14:38:15.281102  245960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:38:15.281150  245960 start.go:353] cluster config:
	{Name:old-k8s-version-794941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-794941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:38:15.283212  245960 out.go:179] * Starting "old-k8s-version-794941" primary control-plane node in "old-k8s-version-794941" cluster
	I1121 14:38:15.284203  245960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:38:15.285343  245960 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:38:15.286284  245960 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1121 14:38:15.286321  245960 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1121 14:38:15.286333  245960 cache.go:65] Caching tarball of preloaded images
	I1121 14:38:15.286406  245960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:38:15.286450  245960 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 14:38:15.286461  245960 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1121 14:38:15.286605  245960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/config.json ...
	I1121 14:38:15.305544  245960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:38:15.305576  245960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:38:15.305594  245960 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:38:15.305623  245960 start.go:360] acquireMachinesLock for old-k8s-version-794941: {Name:mk2e2dfb83292250318daddfa5eb4ed04b2ee440 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:15.305691  245960 start.go:364] duration metric: took 48.174µs to acquireMachinesLock for "old-k8s-version-794941"
	I1121 14:38:15.305708  245960 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:38:15.305712  245960 fix.go:54] fixHost starting: 
	I1121 14:38:15.305907  245960 cli_runner.go:164] Run: docker container inspect old-k8s-version-794941 --format={{.State.Status}}
	I1121 14:38:15.321705  245960 fix.go:112] recreateIfNeeded on old-k8s-version-794941: state=Stopped err=<nil>
	W1121 14:38:15.321729  245960 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 14:38:13.776105  202147 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:38:13.776516  202147 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:38:13.776588  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:38:13.776645  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:38:13.804873  202147 cri.go:89] found id: "92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:38:13.804894  202147 cri.go:89] found id: ""
	I1121 14:38:13.804905  202147 logs.go:282] 1 containers: [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8]
	I1121 14:38:13.804951  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:13.808807  202147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:38:13.808862  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:38:13.835990  202147 cri.go:89] found id: ""
	I1121 14:38:13.836017  202147 logs.go:282] 0 containers: []
	W1121 14:38:13.836027  202147 logs.go:284] No container was found matching "etcd"
	I1121 14:38:13.836034  202147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:38:13.836081  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:38:13.860422  202147 cri.go:89] found id: ""
	I1121 14:38:13.860446  202147 logs.go:282] 0 containers: []
	W1121 14:38:13.860459  202147 logs.go:284] No container was found matching "coredns"
	I1121 14:38:13.860466  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:38:13.860513  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:38:13.885485  202147 cri.go:89] found id: "3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:38:13.885508  202147 cri.go:89] found id: ""
	I1121 14:38:13.885519  202147 logs.go:282] 1 containers: [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980]
	I1121 14:38:13.885582  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:13.889986  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:38:13.890046  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:38:13.919928  202147 cri.go:89] found id: ""
	I1121 14:38:13.919949  202147 logs.go:282] 0 containers: []
	W1121 14:38:13.919955  202147 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:38:13.919961  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:38:13.920001  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:38:13.945675  202147 cri.go:89] found id: "aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:38:13.945692  202147 cri.go:89] found id: ""
	I1121 14:38:13.945699  202147 logs.go:282] 1 containers: [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0]
	I1121 14:38:13.945737  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:13.949224  202147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:38:13.949280  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:38:13.973784  202147 cri.go:89] found id: ""
	I1121 14:38:13.973807  202147 logs.go:282] 0 containers: []
	W1121 14:38:13.973820  202147 logs.go:284] No container was found matching "kindnet"
	I1121 14:38:13.973830  202147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:38:13.973876  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:38:13.999311  202147 cri.go:89] found id: ""
	I1121 14:38:13.999329  202147 logs.go:282] 0 containers: []
	W1121 14:38:13.999336  202147 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:38:13.999347  202147 logs.go:123] Gathering logs for kube-controller-manager [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0] ...
	I1121 14:38:13.999359  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:38:14.025399  202147 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:38:14.025426  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:38:14.079025  202147 logs.go:123] Gathering logs for container status ...
	I1121 14:38:14.079051  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:38:14.113779  202147 logs.go:123] Gathering logs for kubelet ...
	I1121 14:38:14.113808  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:38:14.230493  202147 logs.go:123] Gathering logs for dmesg ...
	I1121 14:38:14.230530  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:38:14.248538  202147 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:38:14.248592  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:38:14.319372  202147 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:38:14.319397  202147 logs.go:123] Gathering logs for kube-apiserver [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8] ...
	I1121 14:38:14.319411  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:38:14.363530  202147 logs.go:123] Gathering logs for kube-scheduler [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980] ...
	I1121 14:38:14.363576  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
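Note: the cycle above (and its repeats below) is minikube's apiserver diagnosis loop: poll the healthz endpoint, and on connection refusal enumerate each control-plane component with crictl and tail the logs of whatever containers exist. A minimal standalone sketch of the same probe, with the endpoint and component names taken from this log:

	#!/usr/bin/env bash
	# Poll apiserver health; on failure, tail logs of any control-plane containers found.
	if ! curl -ksf https://192.168.76.2:8443/healthz >/dev/null; then
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    id=$(sudo crictl ps -a --quiet --name="$name" | head -n1)
	    [ -n "$id" ] && sudo crictl logs --tail 400 "$id"
	  done
	fi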
	I1121 14:38:16.931770  202147 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:38:16.932114  202147 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:38:16.932165  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:38:16.932208  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:38:16.959075  202147 cri.go:89] found id: "92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:38:16.959092  202147 cri.go:89] found id: ""
	I1121 14:38:16.959099  202147 logs.go:282] 1 containers: [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8]
	I1121 14:38:16.959141  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:16.962851  202147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:38:16.962910  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:38:16.986923  202147 cri.go:89] found id: ""
	I1121 14:38:16.986945  202147 logs.go:282] 0 containers: []
	W1121 14:38:16.986955  202147 logs.go:284] No container was found matching "etcd"
	I1121 14:38:16.986962  202147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:38:16.987010  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:38:17.013152  202147 cri.go:89] found id: ""
	I1121 14:38:17.013172  202147 logs.go:282] 0 containers: []
	W1121 14:38:17.013180  202147 logs.go:284] No container was found matching "coredns"
	I1121 14:38:17.013185  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:38:17.013235  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:38:17.037150  202147 cri.go:89] found id: "3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:38:17.037176  202147 cri.go:89] found id: ""
	I1121 14:38:17.037185  202147 logs.go:282] 1 containers: [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980]
	I1121 14:38:17.037236  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:17.040873  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:38:17.040918  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:38:17.064723  202147 cri.go:89] found id: ""
	I1121 14:38:17.064750  202147 logs.go:282] 0 containers: []
	W1121 14:38:17.064759  202147 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:38:17.064766  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:38:17.064802  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:38:17.089722  202147 cri.go:89] found id: "aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:38:17.089743  202147 cri.go:89] found id: ""
	I1121 14:38:17.089753  202147 logs.go:282] 1 containers: [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0]
	I1121 14:38:17.089796  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:17.093291  202147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:38:17.093338  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:38:17.117410  202147 cri.go:89] found id: ""
	I1121 14:38:17.117427  202147 logs.go:282] 0 containers: []
	W1121 14:38:17.117434  202147 logs.go:284] No container was found matching "kindnet"
	I1121 14:38:17.117441  202147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:38:17.117492  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:38:17.141258  202147 cri.go:89] found id: ""
	I1121 14:38:17.141279  202147 logs.go:282] 0 containers: []
	W1121 14:38:17.141287  202147 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:38:17.141298  202147 logs.go:123] Gathering logs for container status ...
	I1121 14:38:17.141307  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:38:17.168264  202147 logs.go:123] Gathering logs for kubelet ...
	I1121 14:38:17.168285  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:38:17.254533  202147 logs.go:123] Gathering logs for dmesg ...
	I1121 14:38:17.254554  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:38:17.267373  202147 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:38:17.267392  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:38:17.320968  202147 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
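Note: this describe-nodes failure recurs after every healthz refusal and is consistent with it: /var/lib/minikube/kubeconfig points kubectl at localhost:8443 inside the node, and nothing is listening there while the control plane is down. Two quick probes from inside the node (a sketch, not commands from this log):

	sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"
	curl -ks https://localhost:8443/healthz; echo " exit=$?"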
	I1121 14:38:17.320986  202147 logs.go:123] Gathering logs for kube-apiserver [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8] ...
	I1121 14:38:17.321001  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:38:17.351162  202147 logs.go:123] Gathering logs for kube-scheduler [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980] ...
	I1121 14:38:17.351183  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:38:17.399993  202147 logs.go:123] Gathering logs for kube-controller-manager [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0] ...
	I1121 14:38:17.400016  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:38:17.424690  202147 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:38:17.424729  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1121 14:38:16.779696  239114 node_ready.go:57] node "no-preload-589411" has "Ready":"False" status (will retry)
	W1121 14:38:19.280324  239114 node_ready.go:57] node "no-preload-589411" has "Ready":"False" status (will retry)
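Note: the two warnings above come from minikube polling node readiness for the no-preload cluster and retrying until the Ready condition flips. The equivalent manual check (node name taken from the log) would be roughly:

	kubectl get node no-preload-589411 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or block until it becomes Ready:
	kubectl wait --for=condition=Ready node/no-preload-589411 --timeout=5m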
	I1121 14:38:15.323067  245960 out.go:252] * Restarting existing docker container for "old-k8s-version-794941" ...
	I1121 14:38:15.323111  245960 cli_runner.go:164] Run: docker start old-k8s-version-794941
	I1121 14:38:15.593870  245960 cli_runner.go:164] Run: docker container inspect old-k8s-version-794941 --format={{.State.Status}}
	I1121 14:38:15.613066  245960 kic.go:430] container "old-k8s-version-794941" state is running.
	I1121 14:38:15.613518  245960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-794941
	I1121 14:38:15.631224  245960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/config.json ...
	I1121 14:38:15.631459  245960 machine.go:94] provisionDockerMachine start ...
	I1121 14:38:15.631524  245960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:38:15.648819  245960 main.go:143] libmachine: Using SSH client type: native
	I1121 14:38:15.649054  245960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1121 14:38:15.649066  245960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:38:15.649618  245960 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57062->127.0.0.1:33064: read: connection reset by peer
	I1121 14:38:18.777895  245960 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-794941
	
	I1121 14:38:18.777918  245960 ubuntu.go:182] provisioning hostname "old-k8s-version-794941"
	I1121 14:38:18.777975  245960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:38:18.795787  245960 main.go:143] libmachine: Using SSH client type: native
	I1121 14:38:18.795998  245960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1121 14:38:18.796014  245960 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-794941 && echo "old-k8s-version-794941" | sudo tee /etc/hostname
	I1121 14:38:18.931139  245960 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-794941
	
	I1121 14:38:18.931220  245960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:38:18.949226  245960 main.go:143] libmachine: Using SSH client type: native
	I1121 14:38:18.949438  245960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1121 14:38:18.949470  245960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-794941' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-794941/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-794941' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:38:19.078593  245960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:38:19.078619  245960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11045/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11045/.minikube}
	I1121 14:38:19.078644  245960 ubuntu.go:190] setting up certificates
	I1121 14:38:19.078652  245960 provision.go:84] configureAuth start
	I1121 14:38:19.078706  245960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-794941
	I1121 14:38:19.096098  245960 provision.go:143] copyHostCerts
	I1121 14:38:19.096149  245960 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem, removing ...
	I1121 14:38:19.096162  245960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem
	I1121 14:38:19.096228  245960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem (1078 bytes)
	I1121 14:38:19.096324  245960 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem, removing ...
	I1121 14:38:19.096332  245960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem
	I1121 14:38:19.096358  245960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem (1123 bytes)
	I1121 14:38:19.096478  245960 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem, removing ...
	I1121 14:38:19.096488  245960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem
	I1121 14:38:19.096517  245960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem (1679 bytes)
	I1121 14:38:19.096604  245960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-794941 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-794941]
	I1121 14:38:19.380415  245960 provision.go:177] copyRemoteCerts
	I1121 14:38:19.380478  245960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:38:19.380510  245960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:38:19.398140  245960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/old-k8s-version-794941/id_rsa Username:docker}
	I1121 14:38:19.491243  245960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:38:19.507660  245960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1121 14:38:19.524208  245960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:38:19.540351  245960 provision.go:87] duration metric: took 461.687885ms to configureAuth
	I1121 14:38:19.540374  245960 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:38:19.540525  245960 config.go:182] Loaded profile config "old-k8s-version-794941": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1121 14:38:19.540655  245960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:38:19.557314  245960 main.go:143] libmachine: Using SSH client type: native
	I1121 14:38:19.557546  245960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1121 14:38:19.557585  245960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:38:19.853508  245960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:38:19.853532  245960 machine.go:97] duration metric: took 4.222056889s to provisionDockerMachine
	I1121 14:38:19.853544  245960 start.go:293] postStartSetup for "old-k8s-version-794941" (driver="docker")
	I1121 14:38:19.853575  245960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:38:19.853655  245960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:38:19.853704  245960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:38:19.871603  245960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/old-k8s-version-794941/id_rsa Username:docker}
	I1121 14:38:19.964195  245960 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:38:19.967415  245960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:38:19.967435  245960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:38:19.967444  245960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/addons for local assets ...
	I1121 14:38:19.967487  245960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/files for local assets ...
	I1121 14:38:19.967578  245960 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem -> 145422.pem in /etc/ssl/certs
	I1121 14:38:19.967686  245960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:38:19.974883  245960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:38:19.991571  245960 start.go:296] duration metric: took 138.001933ms for postStartSetup
	I1121 14:38:19.991638  245960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:38:19.991681  245960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:38:20.011302  245960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/old-k8s-version-794941/id_rsa Username:docker}
	I1121 14:38:20.104679  245960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:38:20.109567  245960 fix.go:56] duration metric: took 4.803837308s for fixHost
	I1121 14:38:20.109589  245960 start.go:83] releasing machines lock for "old-k8s-version-794941", held for 4.803885709s
	I1121 14:38:20.109650  245960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-794941
	I1121 14:38:20.127384  245960 ssh_runner.go:195] Run: cat /version.json
	I1121 14:38:20.127442  245960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:38:20.127493  245960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:38:20.127555  245960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:38:20.147100  245960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/old-k8s-version-794941/id_rsa Username:docker}
	I1121 14:38:20.147503  245960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/old-k8s-version-794941/id_rsa Username:docker}
	I1121 14:38:20.320062  245960 ssh_runner.go:195] Run: systemctl --version
	I1121 14:38:20.326617  245960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:38:20.360355  245960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:38:20.365244  245960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:38:20.365298  245960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:38:20.373575  245960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 14:38:20.373597  245960 start.go:496] detecting cgroup driver to use...
	I1121 14:38:20.373633  245960 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:38:20.373677  245960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:38:20.387385  245960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:38:20.399852  245960 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:38:20.399912  245960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:38:20.413891  245960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:38:20.426238  245960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:38:20.508940  245960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:38:20.593631  245960 docker.go:234] disabling docker service ...
	I1121 14:38:20.593722  245960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:38:20.606755  245960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:38:20.617801  245960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:38:20.692082  245960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:38:20.770252  245960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:38:20.781805  245960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:38:20.794812  245960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1121 14:38:20.794855  245960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:38:20.802718  245960 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 14:38:20.802769  245960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:38:20.810969  245960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:38:20.818924  245960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:38:20.826738  245960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:38:20.834041  245960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:38:20.842181  245960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:38:20.849764  245960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:38:20.857829  245960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:38:20.864371  245960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:38:20.870942  245960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:38:20.945174  245960 ssh_runner.go:195] Run: sudo systemctl restart crio
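Note: taken together, the sed edits above configure the pause image, the systemd cgroup driver, the conmon cgroup, and the unprivileged-port sysctl before CRI-O is restarted. minikube patches /etc/crio/crio.conf.d/02-crio.conf in place; a drop-in expressing the same end state would look roughly like the following (the drop-in filename is hypothetical, and the exact file layout is an assumption):

	sudo tee /etc/crio/crio.conf.d/99-minikube-sketch.conf >/dev/null <<'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward   # matches the ip_forward step above
	sudo systemctl daemon-reload && sudo systemctl restart crio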
	I1121 14:38:21.079375  245960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:38:21.079446  245960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:38:21.083490  245960 start.go:564] Will wait 60s for crictl version
	I1121 14:38:21.083548  245960 ssh_runner.go:195] Run: which crictl
	I1121 14:38:21.086907  245960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:38:21.110644  245960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:38:21.110719  245960 ssh_runner.go:195] Run: crio --version
	I1121 14:38:21.136311  245960 ssh_runner.go:195] Run: crio --version
	I1121 14:38:21.163941  245960 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1121 14:38:21.164836  245960 cli_runner.go:164] Run: docker network inspect old-k8s-version-794941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:38:21.183192  245960 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1121 14:38:21.187011  245960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:38:21.196928  245960 kubeadm.go:884] updating cluster {Name:old-k8s-version-794941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-794941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:38:21.197051  245960 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1121 14:38:21.197095  245960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:38:21.225865  245960 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:38:21.225883  245960 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:38:21.225927  245960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:38:21.249471  245960 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:38:21.249487  245960 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:38:21.249494  245960 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1121 14:38:21.249604  245960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-794941 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-794941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:38:21.249661  245960 ssh_runner.go:195] Run: crio config
	I1121 14:38:21.295028  245960 cni.go:84] Creating CNI manager for ""
	I1121 14:38:21.295047  245960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:38:21.295063  245960 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:38:21.295081  245960 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-794941 NodeName:old-k8s-version-794941 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:38:21.295198  245960 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-794941"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:38:21.295245  245960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1121 14:38:21.302972  245960 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:38:21.303013  245960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:38:21.310079  245960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1121 14:38:21.321821  245960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:38:21.333578  245960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
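Note: the three "scp memory" transfers above materialize the kubelet drop-in, the kubelet unit file, and the kubeadm config rendered earlier. On a cold start, a config file like the one printed above would be handed to kubeadm; a hypothetical invocation, with paths from this log but flags that are assumptions, would be roughly:

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=all   # flag list is an assumption, not from this log

In this run the restart path is taken instead: existing configuration files are detected below ("found existing configuration files, will attempt cluster restart"), so init is skipped.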
	I1121 14:38:21.345681  245960 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:38:21.348844  245960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:38:21.358087  245960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:38:21.438394  245960 ssh_runner.go:195] Run: sudo systemctl start kubelet
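Note: the daemon-reload and kubelet start above activate the drop-in whose content was printed at kubeadm.go:947. A shell equivalent of that transfer-and-activate sequence (content abridged from the unit text above) would be:

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Unit]
	Wants=crio.service
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-794941 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	EOF
	sudo systemctl daemon-reload
	sudo systemctl start kubelet
	systemctl is-active kubelet   # expect "active"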
	I1121 14:38:21.462495  245960 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941 for IP: 192.168.94.2
	I1121 14:38:21.462514  245960 certs.go:195] generating shared ca certs ...
	I1121 14:38:21.462528  245960 certs.go:227] acquiring lock for ca certs: {Name:mkde3a7d6f17b238f06eab3a140993599f1b4367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:38:21.462684  245960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key
	I1121 14:38:21.462737  245960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key
	I1121 14:38:21.462749  245960 certs.go:257] generating profile certs ...
	I1121 14:38:21.462852  245960 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/client.key
	I1121 14:38:21.462903  245960 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/apiserver.key.8024e028
	I1121 14:38:21.462938  245960 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/proxy-client.key
	I1121 14:38:21.463066  245960 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem (1338 bytes)
	W1121 14:38:21.463117  245960 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542_empty.pem, impossibly tiny 0 bytes
	I1121 14:38:21.463129  245960 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:38:21.463159  245960 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:38:21.463192  245960 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:38:21.463221  245960 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem (1679 bytes)
	I1121 14:38:21.463283  245960 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:38:21.464097  245960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:38:21.481511  245960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:38:21.498951  245960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:38:21.516001  245960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 14:38:21.536982  245960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1121 14:38:21.556079  245960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:38:21.572660  245960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:38:21.589650  245960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:38:21.605597  245960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /usr/share/ca-certificates/145422.pem (1708 bytes)
	I1121 14:38:21.621517  245960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:38:21.637448  245960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem --> /usr/share/ca-certificates/14542.pem (1338 bytes)
	I1121 14:38:21.654191  245960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:38:21.665586  245960 ssh_runner.go:195] Run: openssl version
	I1121 14:38:21.671431  245960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145422.pem && ln -fs /usr/share/ca-certificates/145422.pem /etc/ssl/certs/145422.pem"
	I1121 14:38:21.679291  245960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145422.pem
	I1121 14:38:21.682710  245960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145422.pem
	I1121 14:38:21.682752  245960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145422.pem
	I1121 14:38:21.716530  245960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145422.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:38:21.723452  245960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:38:21.731080  245960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:38:21.734335  245960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:38:21.734380  245960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:38:21.768928  245960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:38:21.776411  245960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14542.pem && ln -fs /usr/share/ca-certificates/14542.pem /etc/ssl/certs/14542.pem"
	I1121 14:38:21.784505  245960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14542.pem
	I1121 14:38:21.787829  245960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14542.pem
	I1121 14:38:21.787871  245960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	I1121 14:38:21.821697  245960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14542.pem /etc/ssl/certs/51391683.0"
	I1121 14:38:21.828684  245960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:38:21.832235  245960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:38:21.865319  245960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:38:21.898397  245960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:38:21.931199  245960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:38:21.975615  245960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:38:22.019614  245960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
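Note: the openssl sequence above does two things: it installs each CA into the OpenSSL hash directory, where the symlink name is the certificate's subject hash (hence b5213941.0 for minikubeCA.pem), and it verifies every serving certificate remains valid for at least the next 24 hours (-checkend takes seconds). The same checks by hand, using paths from this log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	# exits 1 if the cert expires within the next 86400 seconds:
	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt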
	I1121 14:38:22.065808  245960 kubeadm.go:401] StartCluster: {Name:old-k8s-version-794941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-794941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:38:22.065911  245960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:38:22.065968  245960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:38:22.103196  245960 cri.go:89] found id: "fed082a62a98e6cf91511c92c50665b154e1cc4e2eb218f55bee3856ca0f4a01"
	I1121 14:38:22.103218  245960 cri.go:89] found id: "17a449a6e88003887696d16299028555a4ecfd3f1608112d0eea538b806b73b5"
	I1121 14:38:22.103224  245960 cri.go:89] found id: "47db37c9bce4dd61dd2ba64e670f76cd77726767ef4c3182af1d5a94ede3419e"
	I1121 14:38:22.103229  245960 cri.go:89] found id: "28544b475c8231677d3576e4a5811ded27f16bd71f8c79e10c8f00528a254273"
	I1121 14:38:22.103233  245960 cri.go:89] found id: ""
	I1121 14:38:22.103279  245960 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 14:38:22.117523  245960 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:38:22Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:38:22.117616  245960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:38:22.126372  245960 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:38:22.126392  245960 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:38:22.126435  245960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:38:22.136517  245960 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:38:22.137531  245960 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-794941" does not appear in /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:38:22.138156  245960 kubeconfig.go:62] /home/jenkins/minikube-integration/21847-11045/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-794941" cluster setting kubeconfig missing "old-k8s-version-794941" context setting]
	I1121 14:38:22.139159  245960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:38:22.141205  245960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:38:22.149107  245960 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1121 14:38:22.149136  245960 kubeadm.go:602] duration metric: took 22.738449ms to restartPrimaryControlPlane
	I1121 14:38:22.149146  245960 kubeadm.go:403] duration metric: took 83.347663ms to StartCluster
	I1121 14:38:22.149161  245960 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:38:22.149212  245960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:38:22.150858  245960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:38:22.151107  245960 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:38:22.151176  245960 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:38:22.151282  245960 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-794941"
	I1121 14:38:22.151298  245960 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-794941"
	W1121 14:38:22.151335  245960 addons.go:248] addon storage-provisioner should already be in state true
	I1121 14:38:22.151334  245960 addons.go:70] Setting dashboard=true in profile "old-k8s-version-794941"
	I1121 14:38:22.151342  245960 config.go:182] Loaded profile config "old-k8s-version-794941": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1121 14:38:22.151362  245960 addons.go:239] Setting addon dashboard=true in "old-k8s-version-794941"
	W1121 14:38:22.151369  245960 addons.go:248] addon dashboard should already be in state true
	I1121 14:38:22.151368  245960 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-794941"
	I1121 14:38:22.151392  245960 host.go:66] Checking if "old-k8s-version-794941" exists ...
	I1121 14:38:22.151395  245960 host.go:66] Checking if "old-k8s-version-794941" exists ...
	I1121 14:38:22.151399  245960 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-794941"
	I1121 14:38:22.151723  245960 cli_runner.go:164] Run: docker container inspect old-k8s-version-794941 --format={{.State.Status}}
	I1121 14:38:22.151847  245960 cli_runner.go:164] Run: docker container inspect old-k8s-version-794941 --format={{.State.Status}}
	I1121 14:38:22.151916  245960 cli_runner.go:164] Run: docker container inspect old-k8s-version-794941 --format={{.State.Status}}
	I1121 14:38:22.155750  245960 out.go:179] * Verifying Kubernetes components...
	I1121 14:38:22.156951  245960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:38:22.176455  245960 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-794941"
	W1121 14:38:22.176471  245960 addons.go:248] addon default-storageclass should already be in state true
	I1121 14:38:22.176493  245960 host.go:66] Checking if "old-k8s-version-794941" exists ...
	I1121 14:38:22.176940  245960 cli_runner.go:164] Run: docker container inspect old-k8s-version-794941 --format={{.State.Status}}
	I1121 14:38:22.178724  245960 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1121 14:38:22.179424  245960 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:38:22.180798  245960 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1121 14:38:19.971307  202147 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:38:19.971679  202147 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:38:19.971736  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:38:19.971797  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:38:20.000027  202147 cri.go:89] found id: "92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:38:20.000089  202147 cri.go:89] found id: ""
	I1121 14:38:20.000105  202147 logs.go:282] 1 containers: [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8]
	I1121 14:38:20.000156  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:20.003881  202147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:38:20.003949  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:38:20.029288  202147 cri.go:89] found id: ""
	I1121 14:38:20.029309  202147 logs.go:282] 0 containers: []
	W1121 14:38:20.029319  202147 logs.go:284] No container was found matching "etcd"
	I1121 14:38:20.029325  202147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:38:20.029377  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:38:20.055449  202147 cri.go:89] found id: ""
	I1121 14:38:20.055477  202147 logs.go:282] 0 containers: []
	W1121 14:38:20.055487  202147 logs.go:284] No container was found matching "coredns"
	I1121 14:38:20.055495  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:38:20.055547  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:38:20.079696  202147 cri.go:89] found id: "3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:38:20.079711  202147 cri.go:89] found id: ""
	I1121 14:38:20.079718  202147 logs.go:282] 1 containers: [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980]
	I1121 14:38:20.079754  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:20.083268  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:38:20.083323  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:38:20.108086  202147 cri.go:89] found id: ""
	I1121 14:38:20.108106  202147 logs.go:282] 0 containers: []
	W1121 14:38:20.108115  202147 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:38:20.108122  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:38:20.108173  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:38:20.135538  202147 cri.go:89] found id: "aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:38:20.135573  202147 cri.go:89] found id: ""
	I1121 14:38:20.135583  202147 logs.go:282] 1 containers: [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0]
	I1121 14:38:20.135634  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:20.140049  202147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:38:20.140106  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:38:20.169084  202147 cri.go:89] found id: ""
	I1121 14:38:20.169108  202147 logs.go:282] 0 containers: []
	W1121 14:38:20.169118  202147 logs.go:284] No container was found matching "kindnet"
	I1121 14:38:20.169126  202147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:38:20.169176  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:38:20.194874  202147 cri.go:89] found id: ""
	I1121 14:38:20.194896  202147 logs.go:282] 0 containers: []
	W1121 14:38:20.194908  202147 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:38:20.194918  202147 logs.go:123] Gathering logs for kube-controller-manager [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0] ...
	I1121 14:38:20.194932  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:38:20.219012  202147 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:38:20.219033  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:38:20.263829  202147 logs.go:123] Gathering logs for container status ...
	I1121 14:38:20.263852  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:38:20.292373  202147 logs.go:123] Gathering logs for kubelet ...
	I1121 14:38:20.292396  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:38:20.386981  202147 logs.go:123] Gathering logs for dmesg ...
	I1121 14:38:20.387019  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:38:20.401248  202147 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:38:20.401271  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:38:20.464262  202147 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:38:20.464281  202147 logs.go:123] Gathering logs for kube-apiserver [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8] ...
	I1121 14:38:20.464293  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:38:20.494369  202147 logs.go:123] Gathering logs for kube-scheduler [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980] ...
	I1121 14:38:20.494394  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
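
The repeated "Checking apiserver healthz" / "connection refused" pairs in this stream come from a poll loop: probe the apiserver's /healthz endpoint, and if it is not yet serving, gather diagnostics and try again a few seconds later. A minimal Go sketch of such a probe, assuming the URL from this run and skipping TLS verification since the probe has no cluster CA; probeHealthz and the timeouts are assumptions, not minikube's actual api_server.go code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// probeHealthz polls url until it returns HTTP 200 or the deadline passes.
	// A refused connection (apiserver not listening yet) means "keep waiting".
	func probeHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second, // assumption; the per-request timeout is not in the log
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the "returned 200: ok" case
				}
			}
			time.Sleep(3 * time.Second) // roughly the cadence visible in the timestamps above
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := probeHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
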
	I1121 14:38:23.052615  202147 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:38:23.053003  202147 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:38:23.053053  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:38:23.053102  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:38:23.081116  202147 cri.go:89] found id: "92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:38:23.081139  202147 cri.go:89] found id: ""
	I1121 14:38:23.081147  202147 logs.go:282] 1 containers: [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8]
	I1121 14:38:23.081194  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:23.084983  202147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:38:23.085046  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:38:23.109882  202147 cri.go:89] found id: ""
	I1121 14:38:23.109900  202147 logs.go:282] 0 containers: []
	W1121 14:38:23.109906  202147 logs.go:284] No container was found matching "etcd"
	I1121 14:38:23.109911  202147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:38:23.109948  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:38:23.134430  202147 cri.go:89] found id: ""
	I1121 14:38:23.134447  202147 logs.go:282] 0 containers: []
	W1121 14:38:23.134453  202147 logs.go:284] No container was found matching "coredns"
	I1121 14:38:23.134458  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:38:23.134499  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:38:23.159512  202147 cri.go:89] found id: "3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:38:23.159532  202147 cri.go:89] found id: ""
	I1121 14:38:23.159541  202147 logs.go:282] 1 containers: [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980]
	I1121 14:38:23.159606  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:23.163184  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:38:23.163247  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
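
Each "listing CRI containers" step above shells out to crictl ps -a --quiet --name=<component>, which prints one container ID per line; empty output becomes the "No container was found matching" warning. A sketch of that step, assuming only that crictl is on the PATH and reachable via sudo (listContainers is a hypothetical name, not the harness's cri.go helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns the IDs of all CRI containers (running or exited)
	// whose name matches the given component, mirroring the
	// `sudo crictl ps -a --quiet --name=<name>` calls in the log.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := listContainers(component)
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", component)
				continue
			}
			fmt.Printf("%s: %v\n", component, ids)
		}
	}
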
	W1121 14:38:21.280522  239114 node_ready.go:57] node "no-preload-589411" has "Ready":"False" status (will retry)
	W1121 14:38:23.780621  239114 node_ready.go:57] node "no-preload-589411" has "Ready":"False" status (will retry)
	I1121 14:38:22.180886  245960 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:38:22.180896  245960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:38:22.180944  245960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:38:22.182014  245960 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1121 14:38:22.182048  245960 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1121 14:38:22.182087  245960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:38:22.207396  245960 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:38:22.207420  245960 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:38:22.207489  245960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:38:22.210252  245960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/old-k8s-version-794941/id_rsa Username:docker}
	I1121 14:38:22.222041  245960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/old-k8s-version-794941/id_rsa Username:docker}
	I1121 14:38:22.236267  245960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/old-k8s-version-794941/id_rsa Username:docker}
	I1121 14:38:22.297716  245960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:38:22.310165  245960 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-794941" to be "Ready" ...
	I1121 14:38:22.322900  245960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:38:22.334145  245960 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1121 14:38:22.334165  245960 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1121 14:38:22.347761  245960 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1121 14:38:22.347779  245960 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1121 14:38:22.349457  245960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:38:22.364089  245960 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1121 14:38:22.364107  245960 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1121 14:38:22.379237  245960 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1121 14:38:22.379255  245960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1121 14:38:22.394990  245960 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1121 14:38:22.395040  245960 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1121 14:38:22.411505  245960 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1121 14:38:22.411547  245960 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1121 14:38:22.426521  245960 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1121 14:38:22.426540  245960 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1121 14:38:22.439302  245960 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1121 14:38:22.439320  245960 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1121 14:38:22.451662  245960 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 14:38:22.451679  245960 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1121 14:38:22.463119  245960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 14:38:24.316513  245960 node_ready.go:49] node "old-k8s-version-794941" is "Ready"
	I1121 14:38:24.316551  245960 node_ready.go:38] duration metric: took 2.006358487s for node "old-k8s-version-794941" to be "Ready" ...
	I1121 14:38:24.316582  245960 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:38:24.316633  245960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:38:24.966895  245960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.643939165s)
	I1121 14:38:24.966896  245960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.617412433s)
	I1121 14:38:25.265503  245960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.802347585s)
	I1121 14:38:25.265609  245960 api_server.go:72] duration metric: took 3.114468877s to wait for apiserver process to appear ...
	I1121 14:38:25.265630  245960 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:38:25.265687  245960 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1121 14:38:25.266793  245960 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-794941 addons enable metrics-server
	
	I1121 14:38:25.267881  245960 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
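
The addon installation above follows a two-step pattern: each manifest is copied to /etc/kubernetes/addons over SSH, then a single kubectl apply with the in-VM kubeconfig covers them all. A sketch of the apply step, with the binary path and manifest names taken from the log; running the command directly rather than through the harness's ssh_runner is the simplification here, and applyAddons is a hypothetical helper:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyAddons mirrors the final step above: once every manifest is in place,
	// run one `sudo KUBECONFIG=... kubectl apply -f ... -f ...` for the lot.
	// sudo accepts VAR=value arguments, which is how the log passes KUBECONFIG.
	func applyAddons(kubectl string, manifests []string) error {
		args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-dp.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
			// ...the remaining dashboard manifests listed in the log
		}
		if err := applyAddons("/var/lib/minikube/binaries/v1.28.0/kubectl", manifests); err != nil {
			fmt.Println(err)
		}
	}
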
	I1121 14:38:23.187782  202147 cri.go:89] found id: ""
	I1121 14:38:23.187803  202147 logs.go:282] 0 containers: []
	W1121 14:38:23.187812  202147 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:38:23.187821  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:38:23.187877  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:38:23.212015  202147 cri.go:89] found id: "aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:38:23.212036  202147 cri.go:89] found id: ""
	I1121 14:38:23.212045  202147 logs.go:282] 1 containers: [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0]
	I1121 14:38:23.212086  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:23.215493  202147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:38:23.215539  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:38:23.239153  202147 cri.go:89] found id: ""
	I1121 14:38:23.239179  202147 logs.go:282] 0 containers: []
	W1121 14:38:23.239186  202147 logs.go:284] No container was found matching "kindnet"
	I1121 14:38:23.239192  202147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:38:23.239246  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:38:23.263087  202147 cri.go:89] found id: ""
	I1121 14:38:23.263104  202147 logs.go:282] 0 containers: []
	W1121 14:38:23.263110  202147 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:38:23.263117  202147 logs.go:123] Gathering logs for kube-controller-manager [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0] ...
	I1121 14:38:23.263129  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:38:23.287994  202147 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:38:23.288018  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:38:23.332151  202147 logs.go:123] Gathering logs for container status ...
	I1121 14:38:23.332176  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:38:23.359221  202147 logs.go:123] Gathering logs for kubelet ...
	I1121 14:38:23.359242  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:38:23.457920  202147 logs.go:123] Gathering logs for dmesg ...
	I1121 14:38:23.457956  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:38:23.473738  202147 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:38:23.473764  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:38:23.535027  202147 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:38:23.535050  202147 logs.go:123] Gathering logs for kube-apiserver [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8] ...
	I1121 14:38:23.535065  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:38:23.572404  202147 logs.go:123] Gathering logs for kube-scheduler [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980] ...
	I1121 14:38:23.572437  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:38:26.130257  202147 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:38:26.130631  202147 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:38:26.130680  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:38:26.130723  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:38:26.157648  202147 cri.go:89] found id: "92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:38:26.157669  202147 cri.go:89] found id: ""
	I1121 14:38:26.157677  202147 logs.go:282] 1 containers: [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8]
	I1121 14:38:26.157718  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:26.161293  202147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:38:26.161338  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:38:26.186206  202147 cri.go:89] found id: ""
	I1121 14:38:26.186226  202147 logs.go:282] 0 containers: []
	W1121 14:38:26.186234  202147 logs.go:284] No container was found matching "etcd"
	I1121 14:38:26.186241  202147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:38:26.186283  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:38:26.210616  202147 cri.go:89] found id: ""
	I1121 14:38:26.210642  202147 logs.go:282] 0 containers: []
	W1121 14:38:26.210650  202147 logs.go:284] No container was found matching "coredns"
	I1121 14:38:26.210656  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:38:26.210705  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:38:26.235829  202147 cri.go:89] found id: "3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:38:26.235846  202147 cri.go:89] found id: ""
	I1121 14:38:26.235853  202147 logs.go:282] 1 containers: [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980]
	I1121 14:38:26.235891  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:26.239355  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:38:26.239430  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:38:26.264721  202147 cri.go:89] found id: ""
	I1121 14:38:26.264739  202147 logs.go:282] 0 containers: []
	W1121 14:38:26.264745  202147 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:38:26.264750  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:38:26.264795  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:38:26.288987  202147 cri.go:89] found id: "aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:38:26.289004  202147 cri.go:89] found id: ""
	I1121 14:38:26.289010  202147 logs.go:282] 1 containers: [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0]
	I1121 14:38:26.289060  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:26.292659  202147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:38:26.292706  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:38:26.318879  202147 cri.go:89] found id: ""
	I1121 14:38:26.318903  202147 logs.go:282] 0 containers: []
	W1121 14:38:26.318915  202147 logs.go:284] No container was found matching "kindnet"
	I1121 14:38:26.318921  202147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:38:26.318968  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:38:26.348498  202147 cri.go:89] found id: ""
	I1121 14:38:26.348527  202147 logs.go:282] 0 containers: []
	W1121 14:38:26.348537  202147 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:38:26.348547  202147 logs.go:123] Gathering logs for dmesg ...
	I1121 14:38:26.348588  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:38:26.364964  202147 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:38:26.364991  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:38:26.420054  202147 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:38:26.420076  202147 logs.go:123] Gathering logs for kube-apiserver [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8] ...
	I1121 14:38:26.420093  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:38:26.449928  202147 logs.go:123] Gathering logs for kube-scheduler [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980] ...
	I1121 14:38:26.449951  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:38:26.497875  202147 logs.go:123] Gathering logs for kube-controller-manager [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0] ...
	I1121 14:38:26.497900  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:38:26.523434  202147 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:38:26.523456  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:38:26.573569  202147 logs.go:123] Gathering logs for container status ...
	I1121 14:38:26.573599  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:38:26.604718  202147 logs.go:123] Gathering logs for kubelet ...
	I1121 14:38:26.604749  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
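
Between healthz retries, each "Gathering logs for ..." pass runs a fixed set of capped collectors: journalctl for the kubelet and CRI-O units, filtered dmesg, and crictl logs for whichever control-plane containers were found, all limited to 400 lines. The commands below are copied from the log; wrapping them in a local helper (gatherLogs, a hypothetical name) rather than the harness's ssh_runner is the only liberty taken:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherLogs runs the same capped log collectors the harness uses between
	// healthz retries, each limited to the last 400 lines.
	func gatherLogs(containerID string) map[string][]byte {
		cmds := map[string]string{
			"kubelet":   "sudo journalctl -u kubelet -n 400",
			"CRI-O":     "sudo journalctl -u crio -n 400",
			"dmesg":     "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"container": "sudo /usr/local/bin/crictl logs --tail 400 " + containerID,
		}
		out := make(map[string][]byte)
		for name, cmd := range cmds {
			b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s failed: %v\n", name, err)
			}
			out[name] = b
		}
		return out
	}

	func main() {
		logs := gatherLogs("92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8")
		for name, b := range logs {
			fmt.Printf("=== %s (%d bytes)\n", name, len(b))
		}
	}
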
	I1121 14:38:25.780485  239114 node_ready.go:49] node "no-preload-589411" is "Ready"
	I1121 14:38:25.780601  239114 node_ready.go:38] duration metric: took 13.503928957s for node "no-preload-589411" to be "Ready" ...
	I1121 14:38:25.780655  239114 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:38:25.780767  239114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:38:25.796890  239114 api_server.go:72] duration metric: took 13.771784417s to wait for apiserver process to appear ...
	I1121 14:38:25.796911  239114 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:38:25.796929  239114 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:38:25.800805  239114 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 14:38:25.801683  239114 api_server.go:141] control plane version: v1.34.1
	I1121 14:38:25.801705  239114 api_server.go:131] duration metric: took 4.788155ms to wait for apiserver health ...
	I1121 14:38:25.801713  239114 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:38:25.804769  239114 system_pods.go:59] 8 kube-system pods found
	I1121 14:38:25.804794  239114 system_pods.go:61] "coredns-66bc5c9577-db94z" [20ec3fff-ac51-4616-85e6-b8c2ccae71a0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:38:25.804800  239114 system_pods.go:61] "etcd-no-preload-589411" [49eeedf4-62b4-4dd2-8883-74f3c8c682e2] Running
	I1121 14:38:25.804806  239114 system_pods.go:61] "kindnet-h7k2r" [14686249-fa8b-404b-b56a-3826f8197d8f] Running
	I1121 14:38:25.804809  239114 system_pods.go:61] "kube-apiserver-no-preload-589411" [5afc6c44-5fd5-47f7-a486-eb38c9a8e7cb] Running
	I1121 14:38:25.804814  239114 system_pods.go:61] "kube-controller-manager-no-preload-589411" [1874dff6-9c69-4cea-b03f-0d1667d36bec] Running
	I1121 14:38:25.804816  239114 system_pods.go:61] "kube-proxy-qhp5d" [b9a10bcb-1f11-4f8f-ab41-d14646b53a8b] Running
	I1121 14:38:25.804821  239114 system_pods.go:61] "kube-scheduler-no-preload-589411" [c009e7c1-25d6-4d04-97de-cae316e66ec1] Running
	I1121 14:38:25.804825  239114 system_pods.go:61] "storage-provisioner" [5e36e1b3-cb3b-4dad-bb3d-a72ba97ff991] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:38:25.804831  239114 system_pods.go:74] duration metric: took 3.111816ms to wait for pod list to return data ...
	I1121 14:38:25.804840  239114 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:38:25.807051  239114 default_sa.go:45] found service account: "default"
	I1121 14:38:25.807069  239114 default_sa.go:55] duration metric: took 2.219761ms for default service account to be created ...
	I1121 14:38:25.807077  239114 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:38:25.809770  239114 system_pods.go:86] 8 kube-system pods found
	I1121 14:38:25.809793  239114 system_pods.go:89] "coredns-66bc5c9577-db94z" [20ec3fff-ac51-4616-85e6-b8c2ccae71a0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:38:25.809798  239114 system_pods.go:89] "etcd-no-preload-589411" [49eeedf4-62b4-4dd2-8883-74f3c8c682e2] Running
	I1121 14:38:25.809803  239114 system_pods.go:89] "kindnet-h7k2r" [14686249-fa8b-404b-b56a-3826f8197d8f] Running
	I1121 14:38:25.809807  239114 system_pods.go:89] "kube-apiserver-no-preload-589411" [5afc6c44-5fd5-47f7-a486-eb38c9a8e7cb] Running
	I1121 14:38:25.809811  239114 system_pods.go:89] "kube-controller-manager-no-preload-589411" [1874dff6-9c69-4cea-b03f-0d1667d36bec] Running
	I1121 14:38:25.809816  239114 system_pods.go:89] "kube-proxy-qhp5d" [b9a10bcb-1f11-4f8f-ab41-d14646b53a8b] Running
	I1121 14:38:25.809822  239114 system_pods.go:89] "kube-scheduler-no-preload-589411" [c009e7c1-25d6-4d04-97de-cae316e66ec1] Running
	I1121 14:38:25.809827  239114 system_pods.go:89] "storage-provisioner" [5e36e1b3-cb3b-4dad-bb3d-a72ba97ff991] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:38:25.809843  239114 retry.go:31] will retry after 222.285793ms: missing components: kube-dns
	I1121 14:38:26.036500  239114 system_pods.go:86] 8 kube-system pods found
	I1121 14:38:26.036534  239114 system_pods.go:89] "coredns-66bc5c9577-db94z" [20ec3fff-ac51-4616-85e6-b8c2ccae71a0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:38:26.036541  239114 system_pods.go:89] "etcd-no-preload-589411" [49eeedf4-62b4-4dd2-8883-74f3c8c682e2] Running
	I1121 14:38:26.036549  239114 system_pods.go:89] "kindnet-h7k2r" [14686249-fa8b-404b-b56a-3826f8197d8f] Running
	I1121 14:38:26.036555  239114 system_pods.go:89] "kube-apiserver-no-preload-589411" [5afc6c44-5fd5-47f7-a486-eb38c9a8e7cb] Running
	I1121 14:38:26.036575  239114 system_pods.go:89] "kube-controller-manager-no-preload-589411" [1874dff6-9c69-4cea-b03f-0d1667d36bec] Running
	I1121 14:38:26.036580  239114 system_pods.go:89] "kube-proxy-qhp5d" [b9a10bcb-1f11-4f8f-ab41-d14646b53a8b] Running
	I1121 14:38:26.036586  239114 system_pods.go:89] "kube-scheduler-no-preload-589411" [c009e7c1-25d6-4d04-97de-cae316e66ec1] Running
	I1121 14:38:26.036597  239114 system_pods.go:89] "storage-provisioner" [5e36e1b3-cb3b-4dad-bb3d-a72ba97ff991] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:38:26.036616  239114 retry.go:31] will retry after 314.38475ms: missing components: kube-dns
	I1121 14:38:26.355679  239114 system_pods.go:86] 8 kube-system pods found
	I1121 14:38:26.355728  239114 system_pods.go:89] "coredns-66bc5c9577-db94z" [20ec3fff-ac51-4616-85e6-b8c2ccae71a0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:38:26.355737  239114 system_pods.go:89] "etcd-no-preload-589411" [49eeedf4-62b4-4dd2-8883-74f3c8c682e2] Running
	I1121 14:38:26.355748  239114 system_pods.go:89] "kindnet-h7k2r" [14686249-fa8b-404b-b56a-3826f8197d8f] Running
	I1121 14:38:26.355759  239114 system_pods.go:89] "kube-apiserver-no-preload-589411" [5afc6c44-5fd5-47f7-a486-eb38c9a8e7cb] Running
	I1121 14:38:26.355765  239114 system_pods.go:89] "kube-controller-manager-no-preload-589411" [1874dff6-9c69-4cea-b03f-0d1667d36bec] Running
	I1121 14:38:26.355774  239114 system_pods.go:89] "kube-proxy-qhp5d" [b9a10bcb-1f11-4f8f-ab41-d14646b53a8b] Running
	I1121 14:38:26.355780  239114 system_pods.go:89] "kube-scheduler-no-preload-589411" [c009e7c1-25d6-4d04-97de-cae316e66ec1] Running
	I1121 14:38:26.355792  239114 system_pods.go:89] "storage-provisioner" [5e36e1b3-cb3b-4dad-bb3d-a72ba97ff991] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:38:26.355808  239114 retry.go:31] will retry after 396.559095ms: missing components: kube-dns
	I1121 14:38:26.756393  239114 system_pods.go:86] 8 kube-system pods found
	I1121 14:38:26.756420  239114 system_pods.go:89] "coredns-66bc5c9577-db94z" [20ec3fff-ac51-4616-85e6-b8c2ccae71a0] Running
	I1121 14:38:26.756425  239114 system_pods.go:89] "etcd-no-preload-589411" [49eeedf4-62b4-4dd2-8883-74f3c8c682e2] Running
	I1121 14:38:26.756430  239114 system_pods.go:89] "kindnet-h7k2r" [14686249-fa8b-404b-b56a-3826f8197d8f] Running
	I1121 14:38:26.756433  239114 system_pods.go:89] "kube-apiserver-no-preload-589411" [5afc6c44-5fd5-47f7-a486-eb38c9a8e7cb] Running
	I1121 14:38:26.756437  239114 system_pods.go:89] "kube-controller-manager-no-preload-589411" [1874dff6-9c69-4cea-b03f-0d1667d36bec] Running
	I1121 14:38:26.756440  239114 system_pods.go:89] "kube-proxy-qhp5d" [b9a10bcb-1f11-4f8f-ab41-d14646b53a8b] Running
	I1121 14:38:26.756443  239114 system_pods.go:89] "kube-scheduler-no-preload-589411" [c009e7c1-25d6-4d04-97de-cae316e66ec1] Running
	I1121 14:38:26.756446  239114 system_pods.go:89] "storage-provisioner" [5e36e1b3-cb3b-4dad-bb3d-a72ba97ff991] Running
	I1121 14:38:26.756453  239114 system_pods.go:126] duration metric: took 949.37067ms to wait for k8s-apps to be running ...
	I1121 14:38:26.756466  239114 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:38:26.756508  239114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:38:26.769040  239114 system_svc.go:56] duration metric: took 12.566758ms WaitForService to wait for kubelet
	I1121 14:38:26.769063  239114 kubeadm.go:587] duration metric: took 14.743958011s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:38:26.769078  239114 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:38:26.771626  239114 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:38:26.771653  239114 node_conditions.go:123] node cpu capacity is 8
	I1121 14:38:26.771666  239114 node_conditions.go:105] duration metric: took 2.583071ms to run NodePressure ...
	I1121 14:38:26.771676  239114 start.go:242] waiting for startup goroutines ...
	I1121 14:38:26.771682  239114 start.go:247] waiting for cluster config update ...
	I1121 14:38:26.771693  239114 start.go:256] writing updated cluster config ...
	I1121 14:38:26.771980  239114 ssh_runner.go:195] Run: rm -f paused
	I1121 14:38:26.775604  239114 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:38:26.778431  239114 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-db94z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:38:26.782111  239114 pod_ready.go:94] pod "coredns-66bc5c9577-db94z" is "Ready"
	I1121 14:38:26.782129  239114 pod_ready.go:86] duration metric: took 3.676727ms for pod "coredns-66bc5c9577-db94z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:38:26.783823  239114 pod_ready.go:83] waiting for pod "etcd-no-preload-589411" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:38:26.787360  239114 pod_ready.go:94] pod "etcd-no-preload-589411" is "Ready"
	I1121 14:38:26.787379  239114 pod_ready.go:86] duration metric: took 3.53794ms for pod "etcd-no-preload-589411" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:38:26.789016  239114 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-589411" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:38:26.792242  239114 pod_ready.go:94] pod "kube-apiserver-no-preload-589411" is "Ready"
	I1121 14:38:26.792262  239114 pod_ready.go:86] duration metric: took 3.227942ms for pod "kube-apiserver-no-preload-589411" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:38:26.793844  239114 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-589411" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:38:27.178497  239114 pod_ready.go:94] pod "kube-controller-manager-no-preload-589411" is "Ready"
	I1121 14:38:27.178519  239114 pod_ready.go:86] duration metric: took 384.659869ms for pod "kube-controller-manager-no-preload-589411" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:38:27.379694  239114 pod_ready.go:83] waiting for pod "kube-proxy-qhp5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:38:27.779754  239114 pod_ready.go:94] pod "kube-proxy-qhp5d" is "Ready"
	I1121 14:38:27.779782  239114 pod_ready.go:86] duration metric: took 400.063417ms for pod "kube-proxy-qhp5d" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:38:27.979486  239114 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-589411" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:38:28.380460  239114 pod_ready.go:94] pod "kube-scheduler-no-preload-589411" is "Ready"
	I1121 14:38:28.380494  239114 pod_ready.go:86] duration metric: took 400.979305ms for pod "kube-scheduler-no-preload-589411" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:38:28.380508  239114 pod_ready.go:40] duration metric: took 1.604880527s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:38:28.427361  239114 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:38:28.429015  239114 out.go:179] * Done! kubectl is now configured to use "no-preload-589411" cluster and "default" namespace by default
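
The "will retry after 222.285793ms: missing components: kube-dns" lines earlier in this run show the readiness wait: list the kube-system pods, and if anything expected is not yet Running, sleep a jittered, growing interval and list again. A client-go sketch of that loop, assuming the kubeconfig path from the log; waitForKubeDNS and the exact backoff policy are guesses, not minikube's actual retry.go:

	package main

	import (
		"context"
		"fmt"
		"math/rand"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForKubeDNS polls kube-system until every pod reports Running, sleeping
	// a jittered, growing interval between attempts, the same shape as the
	// 222ms -> 314ms -> 396ms retries in the log.
	func waitForKubeDNS(clientset *kubernetes.Clientset, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		backoff := 200 * time.Millisecond
		for time.Now().Before(stop) {
			pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
			if err == nil {
				ready := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						ready = false
						break
					}
				}
				if ready {
					return nil
				}
			}
			wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // jitter
			fmt.Printf("will retry after %v: missing components: kube-dns\n", wait)
			time.Sleep(wait)
			backoff = backoff * 3 / 2 // grow the interval (assumed policy)
		}
		return fmt.Errorf("kube-system pods never became ready")
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		_ = waitForKubeDNS(clientset, 6*time.Minute)
	}
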
	I1121 14:38:25.268880  245960 addons.go:530] duration metric: took 3.117711635s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1121 14:38:25.270232  245960 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1121 14:38:25.270248  245960 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
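
A failing /healthz returns HTTP 500 with the per-check breakdown shown above: "[+]" entries have passed and "[-]" entries have not, here the rbac and priority-class bootstrap poststarthooks, which flip to ok moments later as the 200 response just below shows. A sketch that pulls out the failing check names, again skipping TLS verification since no cluster CA is available to the probe (failingChecks is a hypothetical helper):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	// failingChecks fetches /healthz and returns the names of checks reported
	// with a "[-]" prefix in the response body, e.g.
	// "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld".
	func failingChecks(url string) (int, []string, error) {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(url)
		if err != nil {
			return 0, nil, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return resp.StatusCode, nil, err
		}
		var failing []string
		for _, line := range strings.Split(string(body), "\n") {
			if strings.HasPrefix(line, "[-]") {
				fields := strings.Fields(strings.TrimPrefix(line, "[-]"))
				if len(fields) > 0 {
					failing = append(failing, fields[0])
				}
			}
		}
		return resp.StatusCode, failing, nil
	}

	func main() {
		status, failing, err := failingChecks("https://192.168.94.2:8443/healthz")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(status, failing)
	}
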
	I1121 14:38:25.765833  245960 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1121 14:38:25.769606  245960 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1121 14:38:25.770679  245960 api_server.go:141] control plane version: v1.28.0
	I1121 14:38:25.770702  245960 api_server.go:131] duration metric: took 505.06555ms to wait for apiserver health ...
	I1121 14:38:25.770711  245960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:38:25.775026  245960 system_pods.go:59] 8 kube-system pods found
	I1121 14:38:25.775071  245960 system_pods.go:61] "coredns-5dd5756b68-h4xjd" [5c7fd9b1-424a-4401-932f-775af443b1b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:38:25.775085  245960 system_pods.go:61] "etcd-old-k8s-version-794941" [cc92a3b3-3d27-4b3a-806b-a99049546a7d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:38:25.775094  245960 system_pods.go:61] "kindnet-9pjsf" [2c2633f4-bb39-4747-b0c8-c39c76f724cb] Running
	I1121 14:38:25.775109  245960 system_pods.go:61] "kube-apiserver-old-k8s-version-794941" [71dbc4a7-b490-4c73-9e72-0f7fa6d37fca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:38:25.775117  245960 system_pods.go:61] "kube-controller-manager-old-k8s-version-794941" [ea18a1a2-fdd4-49b0-bf69-5fd9e0fe1d14] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:38:25.775125  245960 system_pods.go:61] "kube-proxy-w4rcg" [1ddd5037-510d-4bcf-b7d8-a61b6f2019e2] Running
	I1121 14:38:25.775139  245960 system_pods.go:61] "kube-scheduler-old-k8s-version-794941" [cf5d7654-5d04-4181-9499-0797085f748c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:38:25.775147  245960 system_pods.go:61] "storage-provisioner" [e6cb0d18-f24f-4347-aa16-705c736303b1] Running
	I1121 14:38:25.775156  245960 system_pods.go:74] duration metric: took 4.437443ms to wait for pod list to return data ...
	I1121 14:38:25.775171  245960 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:38:25.778318  245960 default_sa.go:45] found service account: "default"
	I1121 14:38:25.778337  245960 default_sa.go:55] duration metric: took 3.158769ms for default service account to be created ...
	I1121 14:38:25.778346  245960 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:38:25.786708  245960 system_pods.go:86] 8 kube-system pods found
	I1121 14:38:25.786787  245960 system_pods.go:89] "coredns-5dd5756b68-h4xjd" [5c7fd9b1-424a-4401-932f-775af443b1b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:38:25.786809  245960 system_pods.go:89] "etcd-old-k8s-version-794941" [cc92a3b3-3d27-4b3a-806b-a99049546a7d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:38:25.786847  245960 system_pods.go:89] "kindnet-9pjsf" [2c2633f4-bb39-4747-b0c8-c39c76f724cb] Running
	I1121 14:38:25.786857  245960 system_pods.go:89] "kube-apiserver-old-k8s-version-794941" [71dbc4a7-b490-4c73-9e72-0f7fa6d37fca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:38:25.786880  245960 system_pods.go:89] "kube-controller-manager-old-k8s-version-794941" [ea18a1a2-fdd4-49b0-bf69-5fd9e0fe1d14] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:38:25.786887  245960 system_pods.go:89] "kube-proxy-w4rcg" [1ddd5037-510d-4bcf-b7d8-a61b6f2019e2] Running
	I1121 14:38:25.786894  245960 system_pods.go:89] "kube-scheduler-old-k8s-version-794941" [cf5d7654-5d04-4181-9499-0797085f748c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:38:25.786929  245960 system_pods.go:89] "storage-provisioner" [e6cb0d18-f24f-4347-aa16-705c736303b1] Running
	I1121 14:38:25.786948  245960 system_pods.go:126] duration metric: took 8.595399ms to wait for k8s-apps to be running ...
	I1121 14:38:25.786967  245960 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:38:25.787034  245960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:38:25.802359  245960 system_svc.go:56] duration metric: took 15.388744ms WaitForService to wait for kubelet
	I1121 14:38:25.802379  245960 kubeadm.go:587] duration metric: took 3.651243457s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:38:25.802398  245960 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:38:25.804792  245960 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:38:25.804816  245960 node_conditions.go:123] node cpu capacity is 8
	I1121 14:38:25.804831  245960 node_conditions.go:105] duration metric: took 2.426374ms to run NodePressure ...
	I1121 14:38:25.804847  245960 start.go:242] waiting for startup goroutines ...
	I1121 14:38:25.804858  245960 start.go:247] waiting for cluster config update ...
	I1121 14:38:25.804877  245960 start.go:256] writing updated cluster config ...
	I1121 14:38:25.805154  245960 ssh_runner.go:195] Run: rm -f paused
	I1121 14:38:25.809287  245960 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:38:25.812713  245960 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-h4xjd" in "kube-system" namespace to be "Ready" or be gone ...
	W1121 14:38:27.818223  245960 pod_ready.go:104] pod "coredns-5dd5756b68-h4xjd" is not "Ready", error: node "old-k8s-version-794941" hosting pod "coredns-5dd5756b68-h4xjd" is not "Ready" (will retry)
	I1121 14:38:29.202789  202147 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:38:29.203104  202147 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:38:29.203149  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:38:29.203190  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:38:29.229891  202147 cri.go:89] found id: "92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:38:29.229907  202147 cri.go:89] found id: ""
	I1121 14:38:29.229914  202147 logs.go:282] 1 containers: [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8]
	I1121 14:38:29.229952  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:29.233702  202147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:38:29.233763  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:38:29.258357  202147 cri.go:89] found id: ""
	I1121 14:38:29.258380  202147 logs.go:282] 0 containers: []
	W1121 14:38:29.258389  202147 logs.go:284] No container was found matching "etcd"
	I1121 14:38:29.258395  202147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:38:29.258446  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:38:29.283128  202147 cri.go:89] found id: ""
	I1121 14:38:29.283146  202147 logs.go:282] 0 containers: []
	W1121 14:38:29.283155  202147 logs.go:284] No container was found matching "coredns"
	I1121 14:38:29.283162  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:38:29.283219  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:38:29.309115  202147 cri.go:89] found id: "3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:38:29.309134  202147 cri.go:89] found id: ""
	I1121 14:38:29.309142  202147 logs.go:282] 1 containers: [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980]
	I1121 14:38:29.309184  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:29.312723  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:38:29.312777  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:38:29.338368  202147 cri.go:89] found id: ""
	I1121 14:38:29.338390  202147 logs.go:282] 0 containers: []
	W1121 14:38:29.338400  202147 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:38:29.338407  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:38:29.338458  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:38:29.362877  202147 cri.go:89] found id: "aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:38:29.362902  202147 cri.go:89] found id: ""
	I1121 14:38:29.362911  202147 logs.go:282] 1 containers: [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0]
	I1121 14:38:29.362958  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:29.366911  202147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:38:29.366964  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:38:29.391186  202147 cri.go:89] found id: ""
	I1121 14:38:29.391202  202147 logs.go:282] 0 containers: []
	W1121 14:38:29.391208  202147 logs.go:284] No container was found matching "kindnet"
	I1121 14:38:29.391213  202147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:38:29.391261  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:38:29.416631  202147 cri.go:89] found id: ""
	I1121 14:38:29.416651  202147 logs.go:282] 0 containers: []
	W1121 14:38:29.416659  202147 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:38:29.416669  202147 logs.go:123] Gathering logs for dmesg ...
	I1121 14:38:29.416682  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:38:29.429538  202147 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:38:29.429564  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:38:29.482992  202147 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:38:29.483017  202147 logs.go:123] Gathering logs for kube-apiserver [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8] ...
	I1121 14:38:29.483028  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:38:29.519569  202147 logs.go:123] Gathering logs for kube-scheduler [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980] ...
	I1121 14:38:29.519602  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:38:29.581728  202147 logs.go:123] Gathering logs for kube-controller-manager [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0] ...
	I1121 14:38:29.581761  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:38:29.613342  202147 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:38:29.613366  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:38:29.660734  202147 logs.go:123] Gathering logs for container status ...
	I1121 14:38:29.660762  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:38:29.691888  202147 logs.go:123] Gathering logs for kubelet ...
	I1121 14:38:29.691914  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:38:32.278616  202147 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:38:32.279020  202147 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1121 14:38:32.279077  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1121 14:38:32.279131  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1121 14:38:32.307228  202147 cri.go:89] found id: "92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	I1121 14:38:32.307248  202147 cri.go:89] found id: ""
	I1121 14:38:32.307256  202147 logs.go:282] 1 containers: [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8]
	I1121 14:38:32.307307  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:32.311062  202147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1121 14:38:32.311121  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1121 14:38:32.337280  202147 cri.go:89] found id: ""
	I1121 14:38:32.337300  202147 logs.go:282] 0 containers: []
	W1121 14:38:32.337309  202147 logs.go:284] No container was found matching "etcd"
	I1121 14:38:32.337317  202147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1121 14:38:32.337364  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1121 14:38:32.363936  202147 cri.go:89] found id: ""
	I1121 14:38:32.363957  202147 logs.go:282] 0 containers: []
	W1121 14:38:32.363965  202147 logs.go:284] No container was found matching "coredns"
	I1121 14:38:32.363972  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1121 14:38:32.364023  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1121 14:38:32.388366  202147 cri.go:89] found id: "3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:38:32.388387  202147 cri.go:89] found id: ""
	I1121 14:38:32.388396  202147 logs.go:282] 1 containers: [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980]
	I1121 14:38:32.388439  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:32.391927  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1121 14:38:32.391973  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1121 14:38:32.416826  202147 cri.go:89] found id: ""
	I1121 14:38:32.416847  202147 logs.go:282] 0 containers: []
	W1121 14:38:32.416855  202147 logs.go:284] No container was found matching "kube-proxy"
	I1121 14:38:32.416861  202147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1121 14:38:32.416911  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1121 14:38:32.441257  202147 cri.go:89] found id: "aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:38:32.441277  202147 cri.go:89] found id: ""
	I1121 14:38:32.441286  202147 logs.go:282] 1 containers: [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0]
	I1121 14:38:32.441338  202147 ssh_runner.go:195] Run: which crictl
	I1121 14:38:32.444688  202147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1121 14:38:32.444735  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1121 14:38:32.469120  202147 cri.go:89] found id: ""
	I1121 14:38:32.469140  202147 logs.go:282] 0 containers: []
	W1121 14:38:32.469147  202147 logs.go:284] No container was found matching "kindnet"
	I1121 14:38:32.469154  202147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1121 14:38:32.469197  202147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1121 14:38:32.493545  202147 cri.go:89] found id: ""
	I1121 14:38:32.493575  202147 logs.go:282] 0 containers: []
	W1121 14:38:32.493583  202147 logs.go:284] No container was found matching "storage-provisioner"
	I1121 14:38:32.493593  202147 logs.go:123] Gathering logs for kube-scheduler [3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980] ...
	I1121 14:38:32.493605  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3bd4da1f15413a951253e33382e0a4bd73483c56049cdddf0016b8419c7a6980"
	I1121 14:38:32.541324  202147 logs.go:123] Gathering logs for kube-controller-manager [aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0] ...
	I1121 14:38:32.541346  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aaae7baf6ba24ecee82e815350efe3b0da314eaa8312bd2d2a94e19064eab5e0"
	I1121 14:38:32.566485  202147 logs.go:123] Gathering logs for CRI-O ...
	I1121 14:38:32.566505  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1121 14:38:32.615871  202147 logs.go:123] Gathering logs for container status ...
	I1121 14:38:32.615895  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1121 14:38:32.644266  202147 logs.go:123] Gathering logs for kubelet ...
	I1121 14:38:32.644287  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1121 14:38:32.733640  202147 logs.go:123] Gathering logs for dmesg ...
	I1121 14:38:32.733669  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1121 14:38:32.747182  202147 logs.go:123] Gathering logs for describe nodes ...
	I1121 14:38:32.747203  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1121 14:38:32.799893  202147 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1121 14:38:32.799909  202147 logs.go:123] Gathering logs for kube-apiserver [92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8] ...
	I1121 14:38:32.799924  202147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 92dd58313560064f4dd99d028b7c4fef4c934d4160a92818fdb14fc340af4ba8"
	W1121 14:38:30.318181  245960 pod_ready.go:104] pod "coredns-5dd5756b68-h4xjd" is not "Ready", error: node "old-k8s-version-794941" hosting pod "coredns-5dd5756b68-h4xjd" is not "Ready" (will retry)
	W1121 14:38:32.817804  245960 pod_ready.go:104] pod "coredns-5dd5756b68-h4xjd" is not "Ready", error: node "old-k8s-version-794941" hosting pod "coredns-5dd5756b68-h4xjd" is not "Ready" (will retry)
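Between enumeration passes, the runner re-probes the apiserver's /healthz endpoint (the repeated "Checking apiserver healthz ... stopped: ... connection refused" pairs above) and only gathers logs again after each failed probe. A minimal sketch of such a probe, assuming a self-signed apiserver certificate (hence the insecure TLS config) and using the address from the log purely as an example:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves /healthz over HTTPS with a cluster-local CA;
	// skipping verification keeps this sketch self-contained.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.76.2:8443/healthz" // illustrative address from the log
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			// "connection refused" lands here while the apiserver is down.
			fmt.Printf("stopped: %v (attempt %d)\n", err, attempt)
			time.Sleep(3 * time.Second) // back off, then re-probe, as the log does
			continue
		}
		resp.Body.Close()
		fmt.Printf("healthz: %s\n", resp.Status)
		return
	}
	fmt.Println("gave up waiting for apiserver healthz")
}

The same refused connection explains the "describe nodes" failures above: kubectl on the node targets localhost:8443, so it fails for exactly as long as the healthz probe does.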
	
	
	==> CRI-O <==
	Nov 21 14:38:25 no-preload-589411 crio[778]: time="2025-11-21T14:38:25.927360562Z" level=info msg="Starting container: d6f8c70e8e8a9f21a8c9a1737299dfe2fe53a2b8bb59bd78adb1e2ed549c4423" id=74e7cba4-33d1-4622-9a98-0a9d2f49b24c name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:38:25 no-preload-589411 crio[778]: time="2025-11-21T14:38:25.92921087Z" level=info msg="Started container" PID=2909 containerID=d6f8c70e8e8a9f21a8c9a1737299dfe2fe53a2b8bb59bd78adb1e2ed549c4423 description=kube-system/coredns-66bc5c9577-db94z/coredns id=74e7cba4-33d1-4622-9a98-0a9d2f49b24c name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee382372613199970e9f431ab27203020cd0632a4ee150bd6f221257257c16df
	Nov 21 14:38:28 no-preload-589411 crio[778]: time="2025-11-21T14:38:28.866992295Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8babec63-6478-4c11-b035-36128b458222 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:38:28 no-preload-589411 crio[778]: time="2025-11-21T14:38:28.867064024Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:28 no-preload-589411 crio[778]: time="2025-11-21T14:38:28.871817437Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3ff5d237906433549958126933e712eb57caf4caba2e9d15e90d995917a10f22 UID:00913493-ebe3-475f-bad9-5f049f9a6389 NetNS:/var/run/netns/c8b3f283-d786-412e-9e3a-aa724b42743c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006fc508}] Aliases:map[]}"
	Nov 21 14:38:28 no-preload-589411 crio[778]: time="2025-11-21T14:38:28.871841748Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 21 14:38:28 no-preload-589411 crio[778]: time="2025-11-21T14:38:28.880939066Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3ff5d237906433549958126933e712eb57caf4caba2e9d15e90d995917a10f22 UID:00913493-ebe3-475f-bad9-5f049f9a6389 NetNS:/var/run/netns/c8b3f283-d786-412e-9e3a-aa724b42743c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006fc508}] Aliases:map[]}"
	Nov 21 14:38:28 no-preload-589411 crio[778]: time="2025-11-21T14:38:28.881040352Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 21 14:38:28 no-preload-589411 crio[778]: time="2025-11-21T14:38:28.881694383Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 14:38:28 no-preload-589411 crio[778]: time="2025-11-21T14:38:28.882488265Z" level=info msg="Ran pod sandbox 3ff5d237906433549958126933e712eb57caf4caba2e9d15e90d995917a10f22 with infra container: default/busybox/POD" id=8babec63-6478-4c11-b035-36128b458222 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:38:28 no-preload-589411 crio[778]: time="2025-11-21T14:38:28.883423362Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=06aaaa3f-8ff1-4630-92df-239067f6e738 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:38:28 no-preload-589411 crio[778]: time="2025-11-21T14:38:28.883533136Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=06aaaa3f-8ff1-4630-92df-239067f6e738 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:38:28 no-preload-589411 crio[778]: time="2025-11-21T14:38:28.883597589Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=06aaaa3f-8ff1-4630-92df-239067f6e738 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:38:28 no-preload-589411 crio[778]: time="2025-11-21T14:38:28.884047924Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=63b12c08-2ff1-41c5-b771-8401a5b749a8 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:38:28 no-preload-589411 crio[778]: time="2025-11-21T14:38:28.886740547Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:38:29 no-preload-589411 crio[778]: time="2025-11-21T14:38:29.64120387Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=63b12c08-2ff1-41c5-b771-8401a5b749a8 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:38:29 no-preload-589411 crio[778]: time="2025-11-21T14:38:29.64172361Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=51733906-e160-4618-a4fe-1c819b14cba2 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:38:29 no-preload-589411 crio[778]: time="2025-11-21T14:38:29.642971086Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=db699ec8-6509-4e83-bd09-50162c812ca8 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:38:29 no-preload-589411 crio[778]: time="2025-11-21T14:38:29.64606499Z" level=info msg="Creating container: default/busybox/busybox" id=3fe742af-54e9-48ab-9bad-249518e74d89 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:38:29 no-preload-589411 crio[778]: time="2025-11-21T14:38:29.646200727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:29 no-preload-589411 crio[778]: time="2025-11-21T14:38:29.650173883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:29 no-preload-589411 crio[778]: time="2025-11-21T14:38:29.65072336Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:29 no-preload-589411 crio[778]: time="2025-11-21T14:38:29.673881933Z" level=info msg="Created container 87b5555f2d7aeffbb155b4ce6d6af1e6e3618e4bad0ef7171272d86f016aeb13: default/busybox/busybox" id=3fe742af-54e9-48ab-9bad-249518e74d89 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:38:29 no-preload-589411 crio[778]: time="2025-11-21T14:38:29.674423932Z" level=info msg="Starting container: 87b5555f2d7aeffbb155b4ce6d6af1e6e3618e4bad0ef7171272d86f016aeb13" id=2e7c9847-9ae8-4b80-94da-233db18f0fbf name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:38:29 no-preload-589411 crio[778]: time="2025-11-21T14:38:29.676509123Z" level=info msg="Started container" PID=2983 containerID=87b5555f2d7aeffbb155b4ce6d6af1e6e3618e4bad0ef7171272d86f016aeb13 description=default/busybox/busybox id=2e7c9847-9ae8-4b80-94da-233db18f0fbf name=/runtime.v1.RuntimeService/StartContainer sandboxID=3ff5d237906433549958126933e712eb57caf4caba2e9d15e90d995917a10f22
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	87b5555f2d7ae       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   3ff5d23790643       busybox                                     default
	d6f8c70e8e8a9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   ee38237261319       coredns-66bc5c9577-db94z                    kube-system
	366a478e34932       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   0d6453fa2321c       storage-provisioner                         kube-system
	b3eca945f1db6       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   cc641f8a4d3f8       kindnet-h7k2r                               kube-system
	ca3d4ce80a282       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   15864c304ab72       kube-proxy-qhp5d                            kube-system
	df62bd356ad9c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   4043451fff086       kube-scheduler-no-preload-589411            kube-system
	563e31139b371       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   ce261758f147c       kube-apiserver-no-preload-589411            kube-system
	b1a978b42d85e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   ac5526d4be2b8       kube-controller-manager-no-preload-589411   kube-system
	27bc9abdef662       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   741a6f07c365e       etcd-no-preload-589411                      kube-system
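The "container status" step that produced the table above is gathered with a shell fallback, "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a", so a listing is still captured when crictl is missing or the CRI socket is unusable. A sketch of the same fallback logic, assuming passwordless sudo:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl and, if it is absent or fails,
// falls back to `docker ps -a`, as the gathered command does.
func containerStatus() (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	out, derr := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if derr != nil {
		return "", fmt.Errorf("crictl failed (%v) and docker failed (%v)", err, derr)
	}
	return string(out), nil
}

func main() {
	status, err := containerStatus()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Print(status)
}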
	
	
	==> coredns [d6f8c70e8e8a9f21a8c9a1737299dfe2fe53a2b8bb59bd78adb1e2ed549c4423] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39795 - 30624 "HINFO IN 8656932466696887977.782003973265255396. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.496155474s
	
	
	==> describe nodes <==
	Name:               no-preload-589411
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-589411
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=no-preload-589411
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_38_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:38:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-589411
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:38:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:38:37 +0000   Fri, 21 Nov 2025 14:38:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:38:37 +0000   Fri, 21 Nov 2025 14:38:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:38:37 +0000   Fri, 21 Nov 2025 14:38:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:38:37 +0000   Fri, 21 Nov 2025 14:38:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-589411
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                8c0c3626-fd96-4939-aead-166c796faa08
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-db94z                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-589411                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-h7k2r                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-589411             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-589411    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-qhp5d                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-589411             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node no-preload-589411 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node no-preload-589411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node no-preload-589411 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node no-preload-589411 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node no-preload-589411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node no-preload-589411 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node no-preload-589411 event: Registered Node no-preload-589411 in Controller
	  Normal  NodeReady                13s                kubelet          Node no-preload-589411 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [27bc9abdef662526ad83f86cc011257a9cb4a5dda81c71201426ed4e309e0996] <==
	{"level":"warn","ts":"2025-11-21T14:38:03.512673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:03.519586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:03.527377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:03.534709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:03.542116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:03.549804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:03.556038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:03.563290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:03.571368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:03.578290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:03.585580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:03.591505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:03.607785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:03.614972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:03.621218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:03.667601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:38:06.321699Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.442034ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resource-claim-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T14:38:06.321746Z","caller":"traceutil/trace.go:172","msg":"trace[1773639567] transaction","detail":"{read_only:false; response_revision:267; number_of_response:1; }","duration":"154.093591ms","start":"2025-11-21T14:38:06.167635Z","end":"2025-11-21T14:38:06.321728Z","steps":["trace[1773639567] 'process raft request'  (duration: 91.028622ms)","trace[1773639567] 'compare'  (duration: 62.989101ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-21T14:38:06.321787Z","caller":"traceutil/trace.go:172","msg":"trace[905635446] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resource-claim-controller; range_end:; response_count:0; response_revision:266; }","duration":"123.548163ms","start":"2025-11-21T14:38:06.198224Z","end":"2025-11-21T14:38:06.321772Z","steps":["trace[905635446] 'agreement among raft nodes before linearized reading'  (duration: 60.449009ms)","trace[905635446] 'range keys from in-memory index tree'  (duration: 62.962521ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-21T14:38:11.411332Z","caller":"traceutil/trace.go:172","msg":"trace[1117725538] linearizableReadLoop","detail":"{readStateIndex:345; appliedIndex:345; }","duration":"123.488873ms","start":"2025-11-21T14:38:11.287822Z","end":"2025-11-21T14:38:11.411311Z","steps":["trace[1117725538] 'read index received'  (duration: 123.477163ms)","trace[1117725538] 'applied index is now lower than readState.Index'  (duration: 9.437µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T14:38:11.452244Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.408361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-11-21T14:38:11.452298Z","caller":"traceutil/trace.go:172","msg":"trace[412046225] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:335; }","duration":"164.469528ms","start":"2025-11-21T14:38:11.287813Z","end":"2025-11-21T14:38:11.452283Z","steps":["trace[412046225] 'agreement among raft nodes before linearized reading'  (duration: 123.570188ms)","trace[412046225] 'range keys from in-memory index tree'  (duration: 40.749531ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-21T14:38:11.452420Z","caller":"traceutil/trace.go:172","msg":"trace[1110080241] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"182.303027ms","start":"2025-11-21T14:38:11.270101Z","end":"2025-11-21T14:38:11.452404Z","steps":["trace[1110080241] 'process raft request'  (duration: 141.228641ms)","trace[1110080241] 'compare'  (duration: 40.90435ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T14:38:11.452479Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.981468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-21T14:38:11.452529Z","caller":"traceutil/trace.go:172","msg":"trace[2048520630] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:1; response_revision:336; }","duration":"115.046848ms","start":"2025-11-21T14:38:11.337469Z","end":"2025-11-21T14:38:11.452516Z","steps":["trace[2048520630] 'agreement among raft nodes before linearized reading'  (duration: 114.911543ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:38:38 up  1:21,  0 user,  load average: 2.33, 2.32, 1.58
	Linux no-preload-589411 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b3eca945f1db6adc656f7743de73509af78c3b39f3334f94df1463c45ba11698] <==
	I1121 14:38:15.276846       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:38:15.277130       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:38:15.277281       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:38:15.277322       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:38:15.277350       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:38:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:38:15.479873       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:38:15.479907       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:38:15.479922       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:38:15.480893       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:38:15.880834       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:38:15.880862       1 metrics.go:72] Registering metrics
	I1121 14:38:15.880924       1 controller.go:711] "Syncing nftables rules"
	I1121 14:38:25.485640       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:38:25.485697       1 main.go:301] handling current node
	I1121 14:38:35.483800       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:38:35.483846       1 main.go:301] handling current node
	
	
	==> kube-apiserver [563e31139b371834c770fbf5e907089991d449e4e09faa05cd2a53fc22ad1a1e] <==
	I1121 14:38:04.180705       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 14:38:04.180717       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:38:04.180730       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:38:04.180838       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 14:38:04.181471       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:38:04.182768       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:38:04.378387       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:38:05.078874       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:38:05.082693       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:38:05.082710       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:38:05.498262       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:38:05.530975       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:38:05.580188       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:38:05.584821       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1121 14:38:05.585605       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:38:05.588995       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:38:06.090598       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:38:06.575776       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:38:06.584776       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:38:06.591089       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:38:11.457758       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:38:11.942312       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:38:12.099040       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:38:12.102998       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1121 14:38:36.656448       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:51092: use of closed network connection
	
	
	==> kube-controller-manager [b1a978b42d85e633cd2ba10344d19146a86bc0f5d53ccc17548f54d14e425d30] <==
	I1121 14:38:11.090309       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:38:11.090345       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 14:38:11.090404       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 14:38:11.090423       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:38:11.090654       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:38:11.090681       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:38:11.090743       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 14:38:11.090804       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:38:11.090922       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 14:38:11.091651       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 14:38:11.092599       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 14:38:11.093208       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:38:11.095571       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:38:11.098857       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:38:11.101161       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:38:11.104335       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:38:11.109594       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 14:38:11.116807       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:38:11.117118       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:38:11.121115       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:38:11.138592       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:38:11.138607       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:38:11.138614       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:38:11.140549       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 14:38:26.091058       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ca3d4ce80a282e8bd8da9244677adde78f9749c10b536c8a456b5c254be8c779] <==
	I1121 14:38:12.362754       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:38:12.447805       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:38:12.548779       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:38:12.548832       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:38:12.548943       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:38:12.568660       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:38:12.568716       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:38:12.574604       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:38:12.574979       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:38:12.574998       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:38:12.576399       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:38:12.576400       1 config.go:200] "Starting service config controller"
	I1121 14:38:12.576432       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:38:12.576437       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:38:12.576466       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:38:12.576472       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:38:12.576492       1 config.go:309] "Starting node config controller"
	I1121 14:38:12.576502       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:38:12.576509       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:38:12.677223       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:38:12.677246       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:38:12.677227       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [df62bd356ad9cd1ac534be004f01f27a8c5caa78d02e6c65a78057b5feeb7fd2] <==
	E1121 14:38:04.121501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:38:04.122514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:38:04.122659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:38:04.122749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:38:04.122861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:38:04.123248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:38:04.123409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:38:04.123470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:38:04.123633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:38:04.123788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:38:04.123812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:38:04.123848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 14:38:04.123867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:38:04.123961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:38:04.124105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:38:04.124118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:38:04.939026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:38:05.011258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:38:05.064751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:38:05.139638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1121 14:38:05.180104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:38:05.270518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:38:05.317583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:38:05.322741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1121 14:38:07.317922       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:38:07 no-preload-589411 kubelet[2296]: I1121 14:38:07.567383    2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-589411" podStartSLOduration=1.567372034 podStartE2EDuration="1.567372034s" podCreationTimestamp="2025-11-21 14:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:38:07.567317221 +0000 UTC m=+1.107298125" watchObservedRunningTime="2025-11-21 14:38:07.567372034 +0000 UTC m=+1.107352930"
	Nov 21 14:38:07 no-preload-589411 kubelet[2296]: I1121 14:38:07.581410    2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-589411" podStartSLOduration=1.581395722 podStartE2EDuration="1.581395722s" podCreationTimestamp="2025-11-21 14:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:38:07.574637343 +0000 UTC m=+1.114618246" watchObservedRunningTime="2025-11-21 14:38:07.581395722 +0000 UTC m=+1.121376620"
	Nov 21 14:38:07 no-preload-589411 kubelet[2296]: I1121 14:38:07.581514    2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-589411" podStartSLOduration=1.581507049 podStartE2EDuration="1.581507049s" podCreationTimestamp="2025-11-21 14:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:38:07.581477132 +0000 UTC m=+1.121458033" watchObservedRunningTime="2025-11-21 14:38:07.581507049 +0000 UTC m=+1.121487954"
	Nov 21 14:38:07 no-preload-589411 kubelet[2296]: I1121 14:38:07.595489    2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-589411" podStartSLOduration=1.595477625 podStartE2EDuration="1.595477625s" podCreationTimestamp="2025-11-21 14:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:38:07.588212342 +0000 UTC m=+1.128193244" watchObservedRunningTime="2025-11-21 14:38:07.595477625 +0000 UTC m=+1.135458519"
	Nov 21 14:38:11 no-preload-589411 kubelet[2296]: I1121 14:38:11.140646    2296 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:38:11 no-preload-589411 kubelet[2296]: I1121 14:38:11.141454    2296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:38:12 no-preload-589411 kubelet[2296]: I1121 14:38:12.060025    2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b9a10bcb-1f11-4f8f-ab41-d14646b53a8b-kube-proxy\") pod \"kube-proxy-qhp5d\" (UID: \"b9a10bcb-1f11-4f8f-ab41-d14646b53a8b\") " pod="kube-system/kube-proxy-qhp5d"
	Nov 21 14:38:12 no-preload-589411 kubelet[2296]: I1121 14:38:12.060058    2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9a10bcb-1f11-4f8f-ab41-d14646b53a8b-xtables-lock\") pod \"kube-proxy-qhp5d\" (UID: \"b9a10bcb-1f11-4f8f-ab41-d14646b53a8b\") " pod="kube-system/kube-proxy-qhp5d"
	Nov 21 14:38:12 no-preload-589411 kubelet[2296]: I1121 14:38:12.060075    2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7tf8\" (UniqueName: \"kubernetes.io/projected/b9a10bcb-1f11-4f8f-ab41-d14646b53a8b-kube-api-access-g7tf8\") pod \"kube-proxy-qhp5d\" (UID: \"b9a10bcb-1f11-4f8f-ab41-d14646b53a8b\") " pod="kube-system/kube-proxy-qhp5d"
	Nov 21 14:38:12 no-preload-589411 kubelet[2296]: I1121 14:38:12.060267    2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/14686249-fa8b-404b-b56a-3826f8197d8f-cni-cfg\") pod \"kindnet-h7k2r\" (UID: \"14686249-fa8b-404b-b56a-3826f8197d8f\") " pod="kube-system/kindnet-h7k2r"
	Nov 21 14:38:12 no-preload-589411 kubelet[2296]: I1121 14:38:12.060296    2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14686249-fa8b-404b-b56a-3826f8197d8f-xtables-lock\") pod \"kindnet-h7k2r\" (UID: \"14686249-fa8b-404b-b56a-3826f8197d8f\") " pod="kube-system/kindnet-h7k2r"
	Nov 21 14:38:12 no-preload-589411 kubelet[2296]: I1121 14:38:12.060321    2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82244\" (UniqueName: \"kubernetes.io/projected/14686249-fa8b-404b-b56a-3826f8197d8f-kube-api-access-82244\") pod \"kindnet-h7k2r\" (UID: \"14686249-fa8b-404b-b56a-3826f8197d8f\") " pod="kube-system/kindnet-h7k2r"
	Nov 21 14:38:12 no-preload-589411 kubelet[2296]: I1121 14:38:12.060348    2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9a10bcb-1f11-4f8f-ab41-d14646b53a8b-lib-modules\") pod \"kube-proxy-qhp5d\" (UID: \"b9a10bcb-1f11-4f8f-ab41-d14646b53a8b\") " pod="kube-system/kube-proxy-qhp5d"
	Nov 21 14:38:12 no-preload-589411 kubelet[2296]: I1121 14:38:12.060369    2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14686249-fa8b-404b-b56a-3826f8197d8f-lib-modules\") pod \"kindnet-h7k2r\" (UID: \"14686249-fa8b-404b-b56a-3826f8197d8f\") " pod="kube-system/kindnet-h7k2r"
	Nov 21 14:38:12 no-preload-589411 kubelet[2296]: I1121 14:38:12.570224    2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qhp5d" podStartSLOduration=1.570202179 podStartE2EDuration="1.570202179s" podCreationTimestamp="2025-11-21 14:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:38:12.570031719 +0000 UTC m=+6.110012835" watchObservedRunningTime="2025-11-21 14:38:12.570202179 +0000 UTC m=+6.110183081"
	Nov 21 14:38:15 no-preload-589411 kubelet[2296]: I1121 14:38:15.603163    2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-h7k2r" podStartSLOduration=1.920460265 podStartE2EDuration="4.603141864s" podCreationTimestamp="2025-11-21 14:38:11 +0000 UTC" firstStartedPulling="2025-11-21 14:38:12.270317155 +0000 UTC m=+5.810298037" lastFinishedPulling="2025-11-21 14:38:14.952998739 +0000 UTC m=+8.492979636" observedRunningTime="2025-11-21 14:38:15.603010732 +0000 UTC m=+9.142991634" watchObservedRunningTime="2025-11-21 14:38:15.603141864 +0000 UTC m=+9.143122766"
	Nov 21 14:38:25 no-preload-589411 kubelet[2296]: I1121 14:38:25.549970    2296 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:38:25 no-preload-589411 kubelet[2296]: I1121 14:38:25.653471    2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbkx9\" (UniqueName: \"kubernetes.io/projected/20ec3fff-ac51-4616-85e6-b8c2ccae71a0-kube-api-access-sbkx9\") pod \"coredns-66bc5c9577-db94z\" (UID: \"20ec3fff-ac51-4616-85e6-b8c2ccae71a0\") " pod="kube-system/coredns-66bc5c9577-db94z"
	Nov 21 14:38:25 no-preload-589411 kubelet[2296]: I1121 14:38:25.653511    2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dcrk\" (UniqueName: \"kubernetes.io/projected/5e36e1b3-cb3b-4dad-bb3d-a72ba97ff991-kube-api-access-9dcrk\") pod \"storage-provisioner\" (UID: \"5e36e1b3-cb3b-4dad-bb3d-a72ba97ff991\") " pod="kube-system/storage-provisioner"
	Nov 21 14:38:25 no-preload-589411 kubelet[2296]: I1121 14:38:25.653527    2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20ec3fff-ac51-4616-85e6-b8c2ccae71a0-config-volume\") pod \"coredns-66bc5c9577-db94z\" (UID: \"20ec3fff-ac51-4616-85e6-b8c2ccae71a0\") " pod="kube-system/coredns-66bc5c9577-db94z"
	Nov 21 14:38:25 no-preload-589411 kubelet[2296]: I1121 14:38:25.653547    2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5e36e1b3-cb3b-4dad-bb3d-a72ba97ff991-tmp\") pod \"storage-provisioner\" (UID: \"5e36e1b3-cb3b-4dad-bb3d-a72ba97ff991\") " pod="kube-system/storage-provisioner"
	Nov 21 14:38:26 no-preload-589411 kubelet[2296]: I1121 14:38:26.602916    2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.602897789 podStartE2EDuration="14.602897789s" podCreationTimestamp="2025-11-21 14:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:38:26.602756386 +0000 UTC m=+20.142737289" watchObservedRunningTime="2025-11-21 14:38:26.602897789 +0000 UTC m=+20.142878690"
	Nov 21 14:38:26 no-preload-589411 kubelet[2296]: I1121 14:38:26.612045    2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-db94z" podStartSLOduration=14.612027703999999 podStartE2EDuration="14.612027704s" podCreationTimestamp="2025-11-21 14:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:38:26.611786722 +0000 UTC m=+20.151767625" watchObservedRunningTime="2025-11-21 14:38:26.612027704 +0000 UTC m=+20.152008605"
	Nov 21 14:38:28 no-preload-589411 kubelet[2296]: I1121 14:38:28.671697    2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww6h5\" (UniqueName: \"kubernetes.io/projected/00913493-ebe3-475f-bad9-5f049f9a6389-kube-api-access-ww6h5\") pod \"busybox\" (UID: \"00913493-ebe3-475f-bad9-5f049f9a6389\") " pod="default/busybox"
	Nov 21 14:38:30 no-preload-589411 kubelet[2296]: I1121 14:38:30.614248    2296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.855535654 podStartE2EDuration="2.614230268s" podCreationTimestamp="2025-11-21 14:38:28 +0000 UTC" firstStartedPulling="2025-11-21 14:38:28.883767501 +0000 UTC m=+22.423748381" lastFinishedPulling="2025-11-21 14:38:29.642462099 +0000 UTC m=+23.182442995" observedRunningTime="2025-11-21 14:38:30.613928672 +0000 UTC m=+24.153909573" watchObservedRunningTime="2025-11-21 14:38:30.614230268 +0000 UTC m=+24.154211172"
	
	
	==> storage-provisioner [366a478e349324ed5f5dfb673ec1776cb7b1ad629e8bfe6ceeb06a0f8b118968] <==
	I1121 14:38:25.936618       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:38:25.943623       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:38:25.943674       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:38:25.945605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:38:25.951207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:38:25.951426       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:38:25.951578       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f02b2028-b558-4c82-b860-22cca0fa7d7b", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-589411_f540bc28-43fa-4c38-8431-4605a6a59355 became leader
	I1121 14:38:25.951722       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-589411_f540bc28-43fa-4c38-8431-4605a6a59355!
	W1121 14:38:25.953937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:38:25.956907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:38:26.052629       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-589411_f540bc28-43fa-4c38-8431-4605a6a59355!
	W1121 14:38:27.960665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:38:27.965304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:38:29.968618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:38:29.972400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:38:31.975001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:38:31.979824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:38:33.982441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:38:33.986132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:38:35.989157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:38:35.993470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:38:37.996572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:38:38.000386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
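
Two things in the log block above are noise rather than signal. The kubelet's pod_startup_latency_tracker entries with firstStartedPulling/lastFinishedPulling of "0001-01-01 00:00:00 +0000 UTC" are Go zero-time values and simply indicate that no image pull was timed. The storage provisioner's repeating "v1 Endpoints is deprecated in v1.33+" warnings come from its leader election, which still locks on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath) rather than a coordination.k8s.io/v1 Lease. A quick way to inspect that lock by hand (a sketch; assumes the kubectl context name matches the profile):

	kubectl --context no-preload-589411 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# the non-deprecated Lease locks (used by e.g. kube-scheduler) live here instead:
	kubectl --context no-preload-589411 -n kube-system get leases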
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-589411 -n no-preload-589411
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-589411 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.15s)
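
For ad-hoc triage, the same field selector the post-mortem uses can be run by hand to list any pod that is not in the Running phase (a sketch):

	kubectl --context no-preload-589411 get pods -A --field-selector=status.phase!=Running -o wide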

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-794941 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-794941 --alsologtostderr -v=1: exit status 80 (1.744223739s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-794941 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 14:38:54.075499  251295 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:38:54.075741  251295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:38:54.075751  251295 out.go:374] Setting ErrFile to fd 2...
	I1121 14:38:54.075755  251295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:38:54.075946  251295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:38:54.076159  251295 out.go:368] Setting JSON to false
	I1121 14:38:54.076180  251295 mustload.go:66] Loading cluster: old-k8s-version-794941
	I1121 14:38:54.076487  251295 config.go:182] Loaded profile config "old-k8s-version-794941": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1121 14:38:54.076882  251295 cli_runner.go:164] Run: docker container inspect old-k8s-version-794941 --format={{.State.Status}}
	I1121 14:38:54.094854  251295 host.go:66] Checking if "old-k8s-version-794941" exists ...
	I1121 14:38:54.095087  251295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:38:54.151278  251295 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-21 14:38:54.141848135 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:38:54.151878  251295 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-794941 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1121 14:38:54.153800  251295 out.go:179] * Pausing node old-k8s-version-794941 ... 
	I1121 14:38:54.154936  251295 host.go:66] Checking if "old-k8s-version-794941" exists ...
	I1121 14:38:54.155173  251295 ssh_runner.go:195] Run: systemctl --version
	I1121 14:38:54.155212  251295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-794941
	I1121 14:38:54.171385  251295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/old-k8s-version-794941/id_rsa Username:docker}
	I1121 14:38:54.264605  251295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:38:54.275958  251295 pause.go:52] kubelet running: true
	I1121 14:38:54.276033  251295 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:38:54.425967  251295 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:38:54.426067  251295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:38:54.487866  251295 cri.go:89] found id: "4ab4be0d66cf2f3b88b6055f3583c7696ad3a4d9514e845bd33fba43590dcd21"
	I1121 14:38:54.487888  251295 cri.go:89] found id: "b23cc7e23724e9a3c16a0b5ddd305db2a7dbec8d3ac78fc1a54bc3bf1a179ad0"
	I1121 14:38:54.487892  251295 cri.go:89] found id: "1f61fa83329fef694908c75e518a0913b06691afd9040978f849e56c23e5e16d"
	I1121 14:38:54.487896  251295 cri.go:89] found id: "3bd64c70ef61aa1b66639aa8514d2672609ec76928e1cb2edc4b87e7935f0879"
	I1121 14:38:54.487898  251295 cri.go:89] found id: "fed082a62a98e6cf91511c92c50665b154e1cc4e2eb218f55bee3856ca0f4a01"
	I1121 14:38:54.487901  251295 cri.go:89] found id: "17a449a6e88003887696d16299028555a4ecfd3f1608112d0eea538b806b73b5"
	I1121 14:38:54.487904  251295 cri.go:89] found id: "47db37c9bce4dd61dd2ba64e670f76cd77726767ef4c3182af1d5a94ede3419e"
	I1121 14:38:54.487906  251295 cri.go:89] found id: "28544b475c8231677d3576e4a5811ded27f16bd71f8c79e10c8f00528a254273"
	I1121 14:38:54.487908  251295 cri.go:89] found id: "4d8b103d6a3b58640169648cc2590ab988fe248cbc99f9be97c587bb1abcbb50"
	I1121 14:38:54.487914  251295 cri.go:89] found id: "2150a09e5bcb1bf08af2f5cf2b464ade235c8519f224e03da1a9c2e61df779e0"
	I1121 14:38:54.487916  251295 cri.go:89] found id: ""
	I1121 14:38:54.487973  251295 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:38:54.499109  251295 retry.go:31] will retry after 297.140726ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:38:54Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:38:54.796592  251295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:38:54.808889  251295 pause.go:52] kubelet running: false
	I1121 14:38:54.808930  251295 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:38:54.945212  251295 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:38:54.945292  251295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:38:55.017394  251295 cri.go:89] found id: "4ab4be0d66cf2f3b88b6055f3583c7696ad3a4d9514e845bd33fba43590dcd21"
	I1121 14:38:55.017417  251295 cri.go:89] found id: "b23cc7e23724e9a3c16a0b5ddd305db2a7dbec8d3ac78fc1a54bc3bf1a179ad0"
	I1121 14:38:55.017421  251295 cri.go:89] found id: "1f61fa83329fef694908c75e518a0913b06691afd9040978f849e56c23e5e16d"
	I1121 14:38:55.017425  251295 cri.go:89] found id: "3bd64c70ef61aa1b66639aa8514d2672609ec76928e1cb2edc4b87e7935f0879"
	I1121 14:38:55.017427  251295 cri.go:89] found id: "fed082a62a98e6cf91511c92c50665b154e1cc4e2eb218f55bee3856ca0f4a01"
	I1121 14:38:55.017430  251295 cri.go:89] found id: "17a449a6e88003887696d16299028555a4ecfd3f1608112d0eea538b806b73b5"
	I1121 14:38:55.017433  251295 cri.go:89] found id: "47db37c9bce4dd61dd2ba64e670f76cd77726767ef4c3182af1d5a94ede3419e"
	I1121 14:38:55.017437  251295 cri.go:89] found id: "28544b475c8231677d3576e4a5811ded27f16bd71f8c79e10c8f00528a254273"
	I1121 14:38:55.017440  251295 cri.go:89] found id: "4d8b103d6a3b58640169648cc2590ab988fe248cbc99f9be97c587bb1abcbb50"
	I1121 14:38:55.017454  251295 cri.go:89] found id: "2150a09e5bcb1bf08af2f5cf2b464ade235c8519f224e03da1a9c2e61df779e0"
	I1121 14:38:55.017469  251295 cri.go:89] found id: ""
	I1121 14:38:55.017509  251295 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:38:55.030316  251295 retry.go:31] will retry after 474.756092ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:38:55Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:38:55.505885  251295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:38:55.519962  251295 pause.go:52] kubelet running: false
	I1121 14:38:55.520015  251295 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:38:55.669826  251295 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:38:55.669908  251295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:38:55.733748  251295 cri.go:89] found id: "4ab4be0d66cf2f3b88b6055f3583c7696ad3a4d9514e845bd33fba43590dcd21"
	I1121 14:38:55.733766  251295 cri.go:89] found id: "b23cc7e23724e9a3c16a0b5ddd305db2a7dbec8d3ac78fc1a54bc3bf1a179ad0"
	I1121 14:38:55.733770  251295 cri.go:89] found id: "1f61fa83329fef694908c75e518a0913b06691afd9040978f849e56c23e5e16d"
	I1121 14:38:55.733773  251295 cri.go:89] found id: "3bd64c70ef61aa1b66639aa8514d2672609ec76928e1cb2edc4b87e7935f0879"
	I1121 14:38:55.733777  251295 cri.go:89] found id: "fed082a62a98e6cf91511c92c50665b154e1cc4e2eb218f55bee3856ca0f4a01"
	I1121 14:38:55.733783  251295 cri.go:89] found id: "17a449a6e88003887696d16299028555a4ecfd3f1608112d0eea538b806b73b5"
	I1121 14:38:55.733786  251295 cri.go:89] found id: "47db37c9bce4dd61dd2ba64e670f76cd77726767ef4c3182af1d5a94ede3419e"
	I1121 14:38:55.733791  251295 cri.go:89] found id: "28544b475c8231677d3576e4a5811ded27f16bd71f8c79e10c8f00528a254273"
	I1121 14:38:55.733795  251295 cri.go:89] found id: "4d8b103d6a3b58640169648cc2590ab988fe248cbc99f9be97c587bb1abcbb50"
	I1121 14:38:55.733810  251295 cri.go:89] found id: "2150a09e5bcb1bf08af2f5cf2b464ade235c8519f224e03da1a9c2e61df779e0"
	I1121 14:38:55.733820  251295 cri.go:89] found id: ""
	I1121 14:38:55.733855  251295 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:38:55.749641  251295 out.go:203] 
	W1121 14:38:55.751031  251295 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:38:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:38:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:38:55.751051  251295 out.go:285] * 
	* 
	W1121 14:38:55.755712  251295 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:38:55.758664  251295 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-794941 --alsologtostderr -v=1 failed: exit status 80
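
Each retry above fails at the same step: the pause path disables the kubelet, then tries to enumerate containers with `sudo runc list -f json`, and runc exits 1 because its default state directory /run/runc does not exist on this node; after three attempts the command gives up with GUEST_PAUSE / exit status 80. A sketch for probing the runtime state by hand (paths other than /run/runc are assumptions, not confirmed by this log):

	out/minikube-linux-amd64 ssh -p old-k8s-version-794941
	sudo runc list -f json                 # reproduces: open /run/runc: no such file or directory
	sudo ls -d /run/runc /run/crun 2>&1    # check which OCI runtime state root actually exists
	sudo crio config 2>/dev/null | grep -n -A3 '\[crio.runtime'   # what CRI-O is configured to run
	sudo crictl ps                         # the CRI view still works even when runc list does not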
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-794941
helpers_test.go:243: (dbg) docker inspect old-k8s-version-794941:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3",
	        "Created": "2025-11-21T14:37:02.714934052Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246179,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:38:15.345551484Z",
	            "FinishedAt": "2025-11-21T14:38:14.103904339Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3/hostname",
	        "HostsPath": "/var/lib/docker/containers/b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3/hosts",
	        "LogPath": "/var/lib/docker/containers/b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3/b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3-json.log",
	        "Name": "/old-k8s-version-794941",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-794941:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-794941",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3",
	                "LowerDir": "/var/lib/docker/overlay2/c94fc1564c9f1d7c0d4997f74e9d5cf1f54d181439b877d5c418725371d7e094-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c94fc1564c9f1d7c0d4997f74e9d5cf1f54d181439b877d5c418725371d7e094/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c94fc1564c9f1d7c0d4997f74e9d5cf1f54d181439b877d5c418725371d7e094/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c94fc1564c9f1d7c0d4997f74e9d5cf1f54d181439b877d5c418725371d7e094/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-794941",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-794941/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-794941",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-794941",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-794941",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a36be8d68922de7cde45e5d70f4d4d43b036c98524f8ed7d783c5391efc2579b",
	            "SandboxKey": "/var/run/docker/netns/a36be8d68922",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-794941": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fcd2cf468008e08589bfa63705aa450680f6e45d22486fee930702c79b4654b7",
	                    "EndpointID": "40ace85739042613d2f4a3ddcc55102d29e0c2c0b01c5e175609b6dd4c2df9f5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "46:55:6d:05:52:6a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-794941",
	                        "b81aa4f3bb48"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
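
Note the container state in the inspect output: "Running": true with "Paused": false, so the pause never reached the container itself; the command failed earlier, while listing CRI containers. The same two fields as a one-liner (sketch):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-794941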
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-794941 -n old-k8s-version-794941
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-794941 -n old-k8s-version-794941: exit status 2 (336.38556ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
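
The exit status 2 alongside a "Running" host is consistent with the failed pause above: the host container is up, but the kubelet had already been disabled (systemctl disable --now kubelet) before the runc failure, so not every component reports healthy. A fuller per-component view (sketch):

	out/minikube-linux-amd64 status -p old-k8s-version-794941 --output json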
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-794941 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ multinode-384928 node start m03 -v=5 --alsologtostderr                                                                                                                                                                                        │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ node    │ list -p multinode-384928                                                                                                                                                                                                                      │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ stop    │ -p multinode-384928                                                                                                                                                                                                                           │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p cert-expiration-046125 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-046125   │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p cert-options-116734 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ delete  │ -p force-systemd-env-653926                                                                                                                                                                                                                   │ force-systemd-env-653926 │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p pause-738756 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                                     │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:37 UTC │
	│ ssh     │ cert-options-116734 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ ssh     │ -p cert-options-116734 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ delete  │ -p cert-options-116734                                                                                                                                                                                                                        │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:37 UTC │
	│ start   │ -p pause-738756 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ pause   │ -p pause-738756 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	│ delete  │ -p pause-738756                                                                                                                                                                                                                               │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ start   │ -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589411        │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-794941 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	│ stop    │ -p old-k8s-version-794941 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-794941 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ start   │ -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-589411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-589411        │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │                     │
	│ stop    │ -p no-preload-589411 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-589411        │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ image   │ old-k8s-version-794941 image list --format=json                                                                                                                                                                                               │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ pause   │ -p old-k8s-version-794941 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-589411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-589411        │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ start   │ -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589411        │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:38:55
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:38:55.420138  251689 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:38:55.420220  251689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:38:55.420228  251689 out.go:374] Setting ErrFile to fd 2...
	I1121 14:38:55.420232  251689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:38:55.420418  251689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:38:55.421024  251689 out.go:368] Setting JSON to false
	I1121 14:38:55.422594  251689 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4884,"bootTime":1763731051,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:38:55.422673  251689 start.go:143] virtualization: kvm guest
	I1121 14:38:55.424271  251689 out.go:179] * [no-preload-589411] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:38:55.425711  251689 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:38:55.425718  251689 notify.go:221] Checking for updates...
	I1121 14:38:55.427805  251689 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:38:55.428840  251689 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:38:55.429833  251689 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:38:55.430814  251689 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:38:55.431805  251689 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:38:55.433105  251689 config.go:182] Loaded profile config "no-preload-589411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:38:55.433535  251689 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:38:55.457103  251689 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:38:55.457185  251689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:38:55.516678  251689 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:38:55.506493662 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:38:55.516821  251689 docker.go:319] overlay module found
	I1121 14:38:55.518381  251689 out.go:179] * Using the docker driver based on existing profile
	I1121 14:38:55.519383  251689 start.go:309] selected driver: docker
	I1121 14:38:55.519398  251689 start.go:930] validating driver "docker" against &{Name:no-preload-589411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:38:55.519485  251689 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:38:55.520230  251689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:38:55.583998  251689 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:38:55.574184817 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:38:55.584314  251689 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:38:55.584348  251689 cni.go:84] Creating CNI manager for ""
	I1121 14:38:55.584407  251689 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:38:55.584491  251689 start.go:353] cluster config:
	{Name:no-preload-589411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:38:55.586040  251689 out.go:179] * Starting "no-preload-589411" primary control-plane node in "no-preload-589411" cluster
	I1121 14:38:55.587092  251689 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:38:55.588115  251689 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:38:55.589199  251689 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:38:55.589237  251689 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:38:55.589296  251689 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/config.json ...
	I1121 14:38:55.589441  251689 cache.go:107] acquiring lock: {Name:mke75466844e5b5d026463813774c1f728aaddeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589441  251689 cache.go:107] acquiring lock: {Name:mkd98d9687b2082e3f3e88c7fade59999fdecf44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589488  251689 cache.go:107] acquiring lock: {Name:mk44ffbe1b30798f442309f17630d5f372940d03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589521  251689 cache.go:107] acquiring lock: {Name:mke34bf7a39c66927fe2657ec23445f04ebabbb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589524  251689 cache.go:107] acquiring lock: {Name:mk16a0f56ae9b12023a6268ab9e2e14cd775531c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589585  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1121 14:38:55.589527  251689 cache.go:107] acquiring lock: {Name:mkd4a76239f4b71fdf99ac5a759cd01897368f7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589598  251689 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 130.206µs
	I1121 14:38:55.589571  251689 cache.go:107] acquiring lock: {Name:mk42daec646d706ae0683942a66a6acc7e89145d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589608  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1121 14:38:55.589612  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1121 14:38:55.589623  251689 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 99.516µs
	I1121 14:38:55.589632  251689 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1121 14:38:55.589614  251689 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1121 14:38:55.589590  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1121 14:38:55.589613  251689 cache.go:107] acquiring lock: {Name:mkd096b3a3fa30971ac4cf9acc7857a7ffd9853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589656  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1121 14:38:55.589665  251689 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 232.986µs
	I1121 14:38:55.589666  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1121 14:38:55.589673  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1121 14:38:55.589675  251689 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1121 14:38:55.589646  251689 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 127.144µs
	I1121 14:38:55.589682  251689 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 197.433µs
	I1121 14:38:55.589687  251689 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1121 14:38:55.589690  251689 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1121 14:38:55.589683  251689 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 129.75µs
	I1121 14:38:55.589699  251689 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1121 14:38:55.589624  251689 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 201.087µs
	I1121 14:38:55.589717  251689 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1121 14:38:55.589719  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1121 14:38:55.589728  251689 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 177.595µs
	I1121 14:38:55.589749  251689 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1121 14:38:55.589757  251689 cache.go:87] Successfully saved all images to host disk.
	I1121 14:38:55.609379  251689 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:38:55.609398  251689 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:38:55.609415  251689 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:38:55.609438  251689 start.go:360] acquireMachinesLock for no-preload-589411: {Name:mk828f66be6805be79eae119877f5f43d8b19d75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.609493  251689 start.go:364] duration metric: took 39.279µs to acquireMachinesLock for "no-preload-589411"
	I1121 14:38:55.609511  251689 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:38:55.609520  251689 fix.go:54] fixHost starting: 
	I1121 14:38:55.609846  251689 cli_runner.go:164] Run: docker container inspect no-preload-589411 --format={{.State.Status}}
	I1121 14:38:55.625520  251689 fix.go:112] recreateIfNeeded on no-preload-589411: state=Stopped err=<nil>
	W1121 14:38:55.625543  251689 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 21 14:38:40 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:40.317096054Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1524033b1f1a2e5413e86bf8d9159ca760e109c78d3a6bf13999d99d76ca0b7d/merged/etc/group: no such file or directory"
	Nov 21 14:38:40 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:40.317496558Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:40 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:40.335645227Z" level=info msg="Created container 2150a09e5bcb1bf08af2f5cf2b464ade235c8519f224e03da1a9c2e61df779e0: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lv25l/kubernetes-dashboard" id=3da5e721-9cd1-46b6-ac42-bd513bbfa88c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:38:40 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:40.336273183Z" level=info msg="Starting container: 2150a09e5bcb1bf08af2f5cf2b464ade235c8519f224e03da1a9c2e61df779e0" id=b5adf999-aa33-4156-aacf-c8d7a5d2f910 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:38:40 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:40.338477713Z" level=info msg="Started container" PID=1508 containerID=2150a09e5bcb1bf08af2f5cf2b464ade235c8519f224e03da1a9c2e61df779e0 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lv25l/kubernetes-dashboard id=b5adf999-aa33-4156-aacf-c8d7a5d2f910 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1cfaa0204ff7c375a6c5e1335c73aa4bc0234738f1ad6ae3223b5ef7d26c090d
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.563952354Z" level=info msg="Pulled image: registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=2c84f797-f53b-4b5e-a60f-cb5681d61b77 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.56468972Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=98e9f044-1189-4d94-9bde-52193b55f062 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.567205192Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2/dashboard-metrics-scraper" id=a4485f1c-95ad-47fd-a072-344b0dff7989 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.567328525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.574919849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.575587739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.595169317Z" level=info msg="Created container e3a314f603b1c71bd9fcfd77fa37834ce69ba249f39a060e2d3e2d715736b411: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2/dashboard-metrics-scraper" id=a4485f1c-95ad-47fd-a072-344b0dff7989 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.595628376Z" level=info msg="Starting container: e3a314f603b1c71bd9fcfd77fa37834ce69ba249f39a060e2d3e2d715736b411" id=4e434473-c505-4567-984f-4a7816f19aba name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.597169262Z" level=info msg="Started container" PID=1737 containerID=e3a314f603b1c71bd9fcfd77fa37834ce69ba249f39a060e2d3e2d715736b411 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2/dashboard-metrics-scraper id=4e434473-c505-4567-984f-4a7816f19aba name=/runtime.v1.RuntimeService/StartContainer sandboxID=e38761546fd4b20289aac33d339edd859e1e6d788210ea6c9aa7c1248fde6e82
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.619493011Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=713ca127-5338-4c57-b347-c42f9995f6a6 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.622303068Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7d14b3ec-8532-4f13-aacc-b5e6f03fc1fc name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.625421379Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2/dashboard-metrics-scraper" id=73c06c58-df1e-4545-9ee8-c62c12a165d3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.625579009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.63464829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.635341323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.670137431Z" level=info msg="Created container 4d8b103d6a3b58640169648cc2590ab988fe248cbc99f9be97c587bb1abcbb50: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2/dashboard-metrics-scraper" id=73c06c58-df1e-4545-9ee8-c62c12a165d3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.670682698Z" level=info msg="Starting container: 4d8b103d6a3b58640169648cc2590ab988fe248cbc99f9be97c587bb1abcbb50" id=3f5fdbdc-55df-404b-bd12-86c09012f432 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.672369079Z" level=info msg="Started container" PID=1761 containerID=4d8b103d6a3b58640169648cc2590ab988fe248cbc99f9be97c587bb1abcbb50 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2/dashboard-metrics-scraper id=3f5fdbdc-55df-404b-bd12-86c09012f432 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e38761546fd4b20289aac33d339edd859e1e6d788210ea6c9aa7c1248fde6e82
	Nov 21 14:38:44 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:44.624219254Z" level=info msg="Removing container: e3a314f603b1c71bd9fcfd77fa37834ce69ba249f39a060e2d3e2d715736b411" id=114c3f4e-2225-4584-965b-dcc838f5e880 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 14:38:44 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:44.637548474Z" level=info msg="Removed container e3a314f603b1c71bd9fcfd77fa37834ce69ba249f39a060e2d3e2d715736b411: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2/dashboard-metrics-scraper" id=114c3f4e-2225-4584-965b-dcc838f5e880 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	4d8b103d6a3b5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   1                   e38761546fd4b       dashboard-metrics-scraper-5f989dc9cf-27sz2       kubernetes-dashboard
	2150a09e5bcb1       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   16 seconds ago      Running             kubernetes-dashboard        0                   1cfaa0204ff7c       kubernetes-dashboard-8694d4445c-lv25l            kubernetes-dashboard
	4ab4be0d66cf2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           28 seconds ago      Running             coredns                     0                   2d4641210135e       coredns-5dd5756b68-h4xjd                         kube-system
	597ff795a9610       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           28 seconds ago      Running             busybox                     1                   b6668d4d37337       busybox                                          default
	b23cc7e23724e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           31 seconds ago      Exited              storage-provisioner         0                   3246ec2c676c5       storage-provisioner                              kube-system
	1f61fa83329fe       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           31 seconds ago      Running             kube-proxy                  0                   cfc42244a6ab8       kube-proxy-w4rcg                                 kube-system
	3bd64c70ef61a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           31 seconds ago      Running             kindnet-cni                 0                   a078d24c3b03a       kindnet-9pjsf                                    kube-system
	fed082a62a98e       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           34 seconds ago      Running             kube-apiserver              0                   37122919829de       kube-apiserver-old-k8s-version-794941            kube-system
	17a449a6e8800       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           34 seconds ago      Running             kube-controller-manager     0                   2bf10298ea9b0       kube-controller-manager-old-k8s-version-794941   kube-system
	47db37c9bce4d       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           34 seconds ago      Running             kube-scheduler              0                   18040a6f08d7e       kube-scheduler-old-k8s-version-794941            kube-system
	28544b475c823       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           34 seconds ago      Running             etcd                        0                   645d9dd3979da       etcd-old-k8s-version-794941                      kube-system
	
	
	==> coredns [4ab4be0d66cf2f3b88b6055f3583c7696ad3a4d9514e845bd33fba43590dcd21] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51571 - 62285 "HINFO IN 1393753478946371822.3360243421546352995. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.471916445s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-794941
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-794941
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=old-k8s-version-794941
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_37_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:37:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-794941
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:38:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:38:34 +0000   Fri, 21 Nov 2025 14:37:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:38:34 +0000   Fri, 21 Nov 2025 14:37:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:38:34 +0000   Fri, 21 Nov 2025 14:37:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:38:34 +0000   Fri, 21 Nov 2025 14:38:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-794941
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                7ff44db4-ba6f-408c-b662-b0a6f3e0bc74
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 coredns-5dd5756b68-h4xjd                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     84s
	  kube-system                 etcd-old-k8s-version-794941                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         97s
	  kube-system                 kindnet-9pjsf                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      84s
	  kube-system                 kube-apiserver-old-k8s-version-794941             250m (3%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-old-k8s-version-794941    200m (2%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-w4rcg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-old-k8s-version-794941             100m (1%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-27sz2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-lv25l             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 84s                kube-proxy       
	  Normal  Starting                 31s                kube-proxy       
	  Normal  NodeHasSufficientMemory  97s                kubelet          Node old-k8s-version-794941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s                kubelet          Node old-k8s-version-794941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s                kubelet          Node old-k8s-version-794941 status is now: NodeHasSufficientPID
	  Normal  Starting                 97s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           85s                node-controller  Node old-k8s-version-794941 event: Registered Node old-k8s-version-794941 in Controller
	  Normal  NodeReady                71s                kubelet          Node old-k8s-version-794941 status is now: NodeReady
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node old-k8s-version-794941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node old-k8s-version-794941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node old-k8s-version-794941 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20s                node-controller  Node old-k8s-version-794941 event: Registered Node old-k8s-version-794941 in Controller
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [28544b475c8231677d3576e4a5811ded27f16bd71f8c79e10c8f00528a254273] <==
	{"level":"info","ts":"2025-11-21T14:38:22.095621Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-21T14:38:22.095634Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-21T14:38:22.095945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-11-21T14:38:22.096039Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-21T14:38:22.096156Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:38:22.096192Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:38:22.098416Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-21T14:38:22.099777Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-21T14:38:22.099827Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-21T14:38:22.099954Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-21T14:38:22.100021Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-21T14:38:23.38905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-21T14:38:23.389108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-21T14:38:23.389136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-21T14:38:23.38915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-11-21T14:38:23.389155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-21T14:38:23.389163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-11-21T14:38:23.389173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-21T14:38:23.391023Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:38:23.391037Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:38:23.391023Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-794941 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-21T14:38:23.391298Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-21T14:38:23.391333Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-21T14:38:23.39214Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-21T14:38:23.392236Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 14:38:56 up  1:21,  0 user,  load average: 1.88, 2.22, 1.56
	Linux old-k8s-version-794941 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3bd64c70ef61aa1b66639aa8514d2672609ec76928e1cb2edc4b87e7935f0879] <==
	I1121 14:38:25.157543       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:38:25.157860       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1121 14:38:25.157977       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:38:25.157993       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:38:25.158011       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:38:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:38:25.384458       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:38:25.384494       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:38:25.384524       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:38:25.384713       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:38:25.785703       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:38:25.788059       1 metrics.go:72] Registering metrics
	I1121 14:38:25.788434       1 controller.go:711] "Syncing nftables rules"
	I1121 14:38:35.359478       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:38:35.359543       1 main.go:301] handling current node
	I1121 14:38:45.360034       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:38:45.360063       1 main.go:301] handling current node
	I1121 14:38:55.365288       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:38:55.365330       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fed082a62a98e6cf91511c92c50665b154e1cc4e2eb218f55bee3856ca0f4a01] <==
	I1121 14:38:24.388333       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1121 14:38:24.388345       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1121 14:38:24.388501       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1121 14:38:24.388542       1 aggregator.go:166] initial CRD sync complete...
	I1121 14:38:24.388553       1 autoregister_controller.go:141] Starting autoregister controller
	I1121 14:38:24.388581       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:38:24.388589       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:38:24.388352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1121 14:38:24.388355       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1121 14:38:24.388375       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1121 14:38:24.388391       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1121 14:38:24.388392       1 shared_informer.go:318] Caches are synced for configmaps
	I1121 14:38:24.433772       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:38:25.154060       1 controller.go:624] quota admission added evaluator for: namespaces
	I1121 14:38:25.184824       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1121 14:38:25.201194       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:38:25.207275       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:38:25.215892       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1121 14:38:25.247963       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.184.28"}
	I1121 14:38:25.261170       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.130.235"}
	I1121 14:38:25.288784       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:38:36.523127       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1121 14:38:36.538643       1 controller.go:624] quota admission added evaluator for: endpoints
	I1121 14:38:36.581138       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:38:36.581140       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [17a449a6e88003887696d16299028555a4ecfd3f1608112d0eea538b806b73b5] <==
	I1121 14:38:36.545437       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-lv25l"
	I1121 14:38:36.547228       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-27sz2"
	I1121 14:38:36.554069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="24.330902ms"
	I1121 14:38:36.554287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="23.424967ms"
	I1121 14:38:36.559639       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.431509ms"
	I1121 14:38:36.559722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.346µs"
	I1121 14:38:36.561027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="6.70214ms"
	I1121 14:38:36.561118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.242µs"
	I1121 14:38:36.561896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="36.777µs"
	I1121 14:38:36.575099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.52µs"
	I1121 14:38:36.617331       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1121 14:38:36.635356       1 shared_informer.go:318] Caches are synced for disruption
	I1121 14:38:36.656479       1 shared_informer.go:318] Caches are synced for resource quota
	I1121 14:38:36.740511       1 shared_informer.go:318] Caches are synced for resource quota
	I1121 14:38:37.060839       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:38:37.061931       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:38:37.061959       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1121 14:38:40.629269       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.38502ms"
	I1121 14:38:40.629935       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="83.64µs"
	I1121 14:38:42.628736       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="5.019742ms"
	I1121 14:38:42.628829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.582µs"
	I1121 14:38:43.636435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="6.224029ms"
	I1121 14:38:43.636740       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.839µs"
	I1121 14:38:44.632614       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.404µs"
	I1121 14:38:45.634096       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.035µs"
	
	
	==> kube-proxy [1f61fa83329fef694908c75e518a0913b06691afd9040978f849e56c23e5e16d] <==
	I1121 14:38:24.943244       1 server_others.go:69] "Using iptables proxy"
	I1121 14:38:24.954405       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1121 14:38:24.973995       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:38:24.978345       1 server_others.go:152] "Using iptables Proxier"
	I1121 14:38:24.978377       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1121 14:38:24.978385       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1121 14:38:24.978407       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1121 14:38:24.978660       1 server.go:846] "Version info" version="v1.28.0"
	I1121 14:38:24.978677       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:38:24.979289       1 config.go:188] "Starting service config controller"
	I1121 14:38:24.979321       1 config.go:97] "Starting endpoint slice config controller"
	I1121 14:38:24.979327       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1121 14:38:24.979331       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1121 14:38:24.979352       1 config.go:315] "Starting node config controller"
	I1121 14:38:24.979380       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1121 14:38:25.080049       1 shared_informer.go:318] Caches are synced for service config
	I1121 14:38:25.080077       1 shared_informer.go:318] Caches are synced for node config
	I1121 14:38:25.080062       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [47db37c9bce4dd61dd2ba64e670f76cd77726767ef4c3182af1d5a94ede3419e] <==
	I1121 14:38:22.403107       1 serving.go:348] Generated self-signed cert in-memory
	W1121 14:38:24.313570       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1121 14:38:24.313613       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1121 14:38:24.313635       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1121 14:38:24.313646       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1121 14:38:24.343464       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1121 14:38:24.343521       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:38:24.344809       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:38:24.344852       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1121 14:38:24.345845       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1121 14:38:24.345910       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1121 14:38:24.445708       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 21 14:38:26 old-k8s-version-794941 kubelet[735]: E1121 14:38:26.221430     735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c7fd9b1-424a-4401-932f-775af443b1b0-config-volume podName:5c7fd9b1-424a-4401-932f-775af443b1b0 nodeName:}" failed. No retries permitted until 2025-11-21 14:38:28.221405112 +0000 UTC m=+6.755683982 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5c7fd9b1-424a-4401-932f-775af443b1b0-config-volume") pod "coredns-5dd5756b68-h4xjd" (UID: "5c7fd9b1-424a-4401-932f-775af443b1b0") : object "kube-system"/"coredns" not registered
	Nov 21 14:38:26 old-k8s-version-794941 kubelet[735]: E1121 14:38:26.321761     735 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 21 14:38:26 old-k8s-version-794941 kubelet[735]: E1121 14:38:26.321789     735 projected.go:198] Error preparing data for projected volume kube-api-access-pgftz for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 21 14:38:26 old-k8s-version-794941 kubelet[735]: E1121 14:38:26.321848     735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d07a0f79-8b73-4999-a3a1-654a71184bf3-kube-api-access-pgftz podName:d07a0f79-8b73-4999-a3a1-654a71184bf3 nodeName:}" failed. No retries permitted until 2025-11-21 14:38:28.321829426 +0000 UTC m=+6.856108291 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pgftz" (UniqueName: "kubernetes.io/projected/d07a0f79-8b73-4999-a3a1-654a71184bf3-kube-api-access-pgftz") pod "busybox" (UID: "d07a0f79-8b73-4999-a3a1-654a71184bf3") : object "default"/"kube-root-ca.crt" not registered
	Nov 21 14:38:35 old-k8s-version-794941 kubelet[735]: I1121 14:38:35.037224     735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 21 14:38:36 old-k8s-version-794941 kubelet[735]: I1121 14:38:36.551177     735 topology_manager.go:215] "Topology Admit Handler" podUID="876cea80-de57-4e49-bcb2-c83a9dddd295" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-lv25l"
	Nov 21 14:38:36 old-k8s-version-794941 kubelet[735]: I1121 14:38:36.553501     735 topology_manager.go:215] "Topology Admit Handler" podUID="4a9cd23b-d72e-4eca-a7e1-bb5b8600e591" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-27sz2"
	Nov 21 14:38:36 old-k8s-version-794941 kubelet[735]: I1121 14:38:36.675049     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb6sf\" (UniqueName: \"kubernetes.io/projected/876cea80-de57-4e49-bcb2-c83a9dddd295-kube-api-access-jb6sf\") pod \"kubernetes-dashboard-8694d4445c-lv25l\" (UID: \"876cea80-de57-4e49-bcb2-c83a9dddd295\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lv25l"
	Nov 21 14:38:36 old-k8s-version-794941 kubelet[735]: I1121 14:38:36.675108     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/876cea80-de57-4e49-bcb2-c83a9dddd295-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-lv25l\" (UID: \"876cea80-de57-4e49-bcb2-c83a9dddd295\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lv25l"
	Nov 21 14:38:36 old-k8s-version-794941 kubelet[735]: I1121 14:38:36.675202     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4a9cd23b-d72e-4eca-a7e1-bb5b8600e591-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-27sz2\" (UID: \"4a9cd23b-d72e-4eca-a7e1-bb5b8600e591\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2"
	Nov 21 14:38:36 old-k8s-version-794941 kubelet[735]: I1121 14:38:36.675235     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4jbk\" (UniqueName: \"kubernetes.io/projected/4a9cd23b-d72e-4eca-a7e1-bb5b8600e591-kube-api-access-k4jbk\") pod \"dashboard-metrics-scraper-5f989dc9cf-27sz2\" (UID: \"4a9cd23b-d72e-4eca-a7e1-bb5b8600e591\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2"
	Nov 21 14:38:41 old-k8s-version-794941 kubelet[735]: I1121 14:38:41.957273     735 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lv25l" podStartSLOduration=2.52386554 podCreationTimestamp="2025-11-21 14:38:36 +0000 UTC" firstStartedPulling="2025-11-21 14:38:36.876388239 +0000 UTC m=+15.410667116" lastFinishedPulling="2025-11-21 14:38:40.309733852 +0000 UTC m=+18.844012738" observedRunningTime="2025-11-21 14:38:40.622796927 +0000 UTC m=+19.157075807" watchObservedRunningTime="2025-11-21 14:38:41.957211162 +0000 UTC m=+20.491490050"
	Nov 21 14:38:42 old-k8s-version-794941 kubelet[735]: I1121 14:38:42.624210     735 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2" podStartSLOduration=0.936570685 podCreationTimestamp="2025-11-21 14:38:36 +0000 UTC" firstStartedPulling="2025-11-21 14:38:36.876614439 +0000 UTC m=+15.410893318" lastFinishedPulling="2025-11-21 14:38:42.564190642 +0000 UTC m=+21.098469510" observedRunningTime="2025-11-21 14:38:42.623436645 +0000 UTC m=+21.157715530" watchObservedRunningTime="2025-11-21 14:38:42.624146877 +0000 UTC m=+21.158425763"
	Nov 21 14:38:43 old-k8s-version-794941 kubelet[735]: I1121 14:38:43.619023     735 scope.go:117] "RemoveContainer" containerID="e3a314f603b1c71bd9fcfd77fa37834ce69ba249f39a060e2d3e2d715736b411"
	Nov 21 14:38:44 old-k8s-version-794941 kubelet[735]: I1121 14:38:44.622608     735 scope.go:117] "RemoveContainer" containerID="e3a314f603b1c71bd9fcfd77fa37834ce69ba249f39a060e2d3e2d715736b411"
	Nov 21 14:38:44 old-k8s-version-794941 kubelet[735]: I1121 14:38:44.622800     735 scope.go:117] "RemoveContainer" containerID="4d8b103d6a3b58640169648cc2590ab988fe248cbc99f9be97c587bb1abcbb50"
	Nov 21 14:38:44 old-k8s-version-794941 kubelet[735]: E1121 14:38:44.623193     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-27sz2_kubernetes-dashboard(4a9cd23b-d72e-4eca-a7e1-bb5b8600e591)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2" podUID="4a9cd23b-d72e-4eca-a7e1-bb5b8600e591"
	Nov 21 14:38:45 old-k8s-version-794941 kubelet[735]: I1121 14:38:45.625737     735 scope.go:117] "RemoveContainer" containerID="4d8b103d6a3b58640169648cc2590ab988fe248cbc99f9be97c587bb1abcbb50"
	Nov 21 14:38:45 old-k8s-version-794941 kubelet[735]: E1121 14:38:45.625975     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-27sz2_kubernetes-dashboard(4a9cd23b-d72e-4eca-a7e1-bb5b8600e591)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2" podUID="4a9cd23b-d72e-4eca-a7e1-bb5b8600e591"
	Nov 21 14:38:46 old-k8s-version-794941 kubelet[735]: I1121 14:38:46.855876     735 scope.go:117] "RemoveContainer" containerID="4d8b103d6a3b58640169648cc2590ab988fe248cbc99f9be97c587bb1abcbb50"
	Nov 21 14:38:46 old-k8s-version-794941 kubelet[735]: E1121 14:38:46.856117     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-27sz2_kubernetes-dashboard(4a9cd23b-d72e-4eca-a7e1-bb5b8600e591)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2" podUID="4a9cd23b-d72e-4eca-a7e1-bb5b8600e591"
	Nov 21 14:38:54 old-k8s-version-794941 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:38:54 old-k8s-version-794941 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:38:54 old-k8s-version-794941 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 21 14:38:54 old-k8s-version-794941 systemd[1]: kubelet.service: Consumed 1.031s CPU time.
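	
	Note: the "back-off 10s restarting failed container" entries above are kubelet's standard CrashLoopBackOff behavior, which starts at 10s and doubles on each failed restart up to a 5m cap. One way to watch the restart counter for the affected pod (context and pod name taken verbatim from the log):
	
	  kubectl --context old-k8s-version-794941 -n kubernetes-dashboard get pod dashboard-metrics-scraper-5f989dc9cf-27sz2 -w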
	
	
	==> kubernetes-dashboard [2150a09e5bcb1bf08af2f5cf2b464ade235c8519f224e03da1a9c2e61df779e0] <==
	2025/11/21 14:38:40 Using namespace: kubernetes-dashboard
	2025/11/21 14:38:40 Using in-cluster config to connect to apiserver
	2025/11/21 14:38:40 Using secret token for csrf signing
	2025/11/21 14:38:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 14:38:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 14:38:40 Successful initial request to the apiserver, version: v1.28.0
	2025/11/21 14:38:40 Generating JWE encryption key
	2025/11/21 14:38:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 14:38:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 14:38:40 Initializing JWE encryption key from synchronized object
	2025/11/21 14:38:40 Creating in-cluster Sidecar client
	2025/11/21 14:38:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 14:38:40 Serving insecurely on HTTP port: 9090
	2025/11/21 14:38:40 Starting overwatch
	
	
	==> storage-provisioner [b23cc7e23724e9a3c16a0b5ddd305db2a7dbec8d3ac78fc1a54bc3bf1a179ad0] <==
	I1121 14:38:24.902974       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 14:38:54.905815       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
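The storage-provisioner fatal above is an i/o timeout against the in-cluster apiserver VIP (10.96.0.1:443), consistent with the guest having been paused mid-test. The same endpoint can be probed from inside the node (a sketch; meaningful only while the profile is unpaused):

	out/minikube-linux-amd64 -p old-k8s-version-794941 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version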
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-794941 -n old-k8s-version-794941
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-794941 -n old-k8s-version-794941: exit status 2 (324.89676ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
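Exit status 2 here appears to be minikube's "cluster not running" status bit: the host is up (hence "Running" on stdout) but the apiserver is not answering, matching a pause that only partially succeeded. Several components can be checked in one call with the same Go-template flag the harness uses:

	out/minikube-linux-amd64 status -p old-k8s-version-794941 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'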
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-794941 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-794941
helpers_test.go:243: (dbg) docker inspect old-k8s-version-794941:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3",
	        "Created": "2025-11-21T14:37:02.714934052Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246179,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:38:15.345551484Z",
	            "FinishedAt": "2025-11-21T14:38:14.103904339Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3/hostname",
	        "HostsPath": "/var/lib/docker/containers/b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3/hosts",
	        "LogPath": "/var/lib/docker/containers/b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3/b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3-json.log",
	        "Name": "/old-k8s-version-794941",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-794941:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-794941",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b81aa4f3bb48453daf4fe7a0508db9821cc8705861ed4feea1a5eeb7c75c5ce3",
	                "LowerDir": "/var/lib/docker/overlay2/c94fc1564c9f1d7c0d4997f74e9d5cf1f54d181439b877d5c418725371d7e094-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c94fc1564c9f1d7c0d4997f74e9d5cf1f54d181439b877d5c418725371d7e094/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c94fc1564c9f1d7c0d4997f74e9d5cf1f54d181439b877d5c418725371d7e094/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c94fc1564c9f1d7c0d4997f74e9d5cf1f54d181439b877d5c418725371d7e094/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-794941",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-794941/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-794941",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-794941",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-794941",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a36be8d68922de7cde45e5d70f4d4d43b036c98524f8ed7d783c5391efc2579b",
	            "SandboxKey": "/var/run/docker/netns/a36be8d68922",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-794941": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fcd2cf468008e08589bfa63705aa450680f6e45d22486fee930702c79b4654b7",
	                    "EndpointID": "40ace85739042613d2f4a3ddcc55102d29e0c2c0b01c5e175609b6dd4c2df9f5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "46:55:6d:05:52:6a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-794941",
	                        "b81aa4f3bb48"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
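The inspect output shows the kic container itself running and not paused at the Docker level ("Paused": false), so the pause failure is inside the guest rather than in the container runtime. The relevant state fields can be pulled directly with a Go template:

	docker inspect old-k8s-version-794941 --format '{{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}'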
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-794941 -n old-k8s-version-794941
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-794941 -n old-k8s-version-794941: exit status 2 (317.695156ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-794941 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ multinode-384928 node start m03 -v=5 --alsologtostderr                                                                                                                                                                                        │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:28 UTC │
	│ node    │ list -p multinode-384928                                                                                                                                                                                                                      │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │                     │
	│ stop    │ -p multinode-384928                                                                                                                                                                                                                           │ multinode-384928         │ jenkins │ v1.37.0 │ 21 Nov 25 14:28 UTC │ 21 Nov 25 14:29 UTC │
	│ start   │ -p cert-expiration-046125 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-046125   │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p cert-options-116734 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ delete  │ -p force-systemd-env-653926                                                                                                                                                                                                                   │ force-systemd-env-653926 │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p pause-738756 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                                     │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:37 UTC │
	│ ssh     │ cert-options-116734 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ ssh     │ -p cert-options-116734 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ delete  │ -p cert-options-116734                                                                                                                                                                                                                        │ cert-options-116734      │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:37 UTC │
	│ start   │ -p pause-738756 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ pause   │ -p pause-738756 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	│ delete  │ -p pause-738756                                                                                                                                                                                                                               │ pause-738756             │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ start   │ -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589411        │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-794941 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	│ stop    │ -p old-k8s-version-794941 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-794941 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ start   │ -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-589411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-589411        │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │                     │
	│ stop    │ -p no-preload-589411 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-589411        │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ image   │ old-k8s-version-794941 image list --format=json                                                                                                                                                                                               │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ pause   │ -p old-k8s-version-794941 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-794941   │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-589411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-589411        │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ start   │ -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589411        │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:38:55
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:38:55.420138  251689 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:38:55.420220  251689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:38:55.420228  251689 out.go:374] Setting ErrFile to fd 2...
	I1121 14:38:55.420232  251689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:38:55.420418  251689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:38:55.421024  251689 out.go:368] Setting JSON to false
	I1121 14:38:55.422594  251689 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4884,"bootTime":1763731051,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:38:55.422673  251689 start.go:143] virtualization: kvm guest
	I1121 14:38:55.424271  251689 out.go:179] * [no-preload-589411] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:38:55.425711  251689 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:38:55.425718  251689 notify.go:221] Checking for updates...
	I1121 14:38:55.427805  251689 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:38:55.428840  251689 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:38:55.429833  251689 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:38:55.430814  251689 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:38:55.431805  251689 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:38:55.433105  251689 config.go:182] Loaded profile config "no-preload-589411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:38:55.433535  251689 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:38:55.457103  251689 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:38:55.457185  251689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:38:55.516678  251689 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:38:55.506493662 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:38:55.516821  251689 docker.go:319] overlay module found
	I1121 14:38:55.518381  251689 out.go:179] * Using the docker driver based on existing profile
	I1121 14:38:55.519383  251689 start.go:309] selected driver: docker
	I1121 14:38:55.519398  251689 start.go:930] validating driver "docker" against &{Name:no-preload-589411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:38:55.519485  251689 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:38:55.520230  251689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:38:55.583998  251689 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:38:55.574184817 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:38:55.584314  251689 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:38:55.584348  251689 cni.go:84] Creating CNI manager for ""
	I1121 14:38:55.584407  251689 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:38:55.584491  251689 start.go:353] cluster config:
	{Name:no-preload-589411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:38:55.586040  251689 out.go:179] * Starting "no-preload-589411" primary control-plane node in "no-preload-589411" cluster
	I1121 14:38:55.587092  251689 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:38:55.588115  251689 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:38:55.589199  251689 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:38:55.589237  251689 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:38:55.589296  251689 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/config.json ...
	I1121 14:38:55.589441  251689 cache.go:107] acquiring lock: {Name:mke75466844e5b5d026463813774c1f728aaddeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589441  251689 cache.go:107] acquiring lock: {Name:mkd98d9687b2082e3f3e88c7fade59999fdecf44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589488  251689 cache.go:107] acquiring lock: {Name:mk44ffbe1b30798f442309f17630d5f372940d03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589521  251689 cache.go:107] acquiring lock: {Name:mke34bf7a39c66927fe2657ec23445f04ebabbb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589524  251689 cache.go:107] acquiring lock: {Name:mk16a0f56ae9b12023a6268ab9e2e14cd775531c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589585  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1121 14:38:55.589527  251689 cache.go:107] acquiring lock: {Name:mkd4a76239f4b71fdf99ac5a759cd01897368f7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589598  251689 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 130.206µs
	I1121 14:38:55.589571  251689 cache.go:107] acquiring lock: {Name:mk42daec646d706ae0683942a66a6acc7e89145d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589608  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1121 14:38:55.589612  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1121 14:38:55.589623  251689 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 99.516µs
	I1121 14:38:55.589632  251689 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1121 14:38:55.589614  251689 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1121 14:38:55.589590  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1121 14:38:55.589613  251689 cache.go:107] acquiring lock: {Name:mkd096b3a3fa30971ac4cf9acc7857a7ffd9853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.589656  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1121 14:38:55.589665  251689 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 232.986µs
	I1121 14:38:55.589666  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1121 14:38:55.589673  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1121 14:38:55.589675  251689 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1121 14:38:55.589646  251689 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 127.144µs
	I1121 14:38:55.589682  251689 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 197.433µs
	I1121 14:38:55.589687  251689 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1121 14:38:55.589690  251689 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1121 14:38:55.589683  251689 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 129.75µs
	I1121 14:38:55.589699  251689 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1121 14:38:55.589624  251689 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 201.087µs
	I1121 14:38:55.589717  251689 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1121 14:38:55.589719  251689 cache.go:115] /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1121 14:38:55.589728  251689 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 177.595µs
	I1121 14:38:55.589749  251689 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1121 14:38:55.589757  251689 cache.go:87] Successfully saved all images to host disk.
	I1121 14:38:55.609379  251689 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:38:55.609398  251689 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:38:55.609415  251689 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:38:55.609438  251689 start.go:360] acquireMachinesLock for no-preload-589411: {Name:mk828f66be6805be79eae119877f5f43d8b19d75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:38:55.609493  251689 start.go:364] duration metric: took 39.279µs to acquireMachinesLock for "no-preload-589411"
	I1121 14:38:55.609511  251689 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:38:55.609520  251689 fix.go:54] fixHost starting: 
	I1121 14:38:55.609846  251689 cli_runner.go:164] Run: docker container inspect no-preload-589411 --format={{.State.Status}}
	I1121 14:38:55.625520  251689 fix.go:112] recreateIfNeeded on no-preload-589411: state=Stopped err=<nil>
	W1121 14:38:55.625543  251689 fix.go:138] unexpected machine state, will restart: <nil>
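	
	Aside: the cache.go lines above take one named lock per image and then find each tarball already on disk, which is why all eight images resolve in microseconds. The cached artifacts can be listed at the path the log prints:
	
	  ls /home/jenkins/minikube-integration/21847-11045/.minikube/cache/images/amd64/registry.k8s.io/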
	
	
	==> CRI-O <==
	Nov 21 14:38:40 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:40.317096054Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1524033b1f1a2e5413e86bf8d9159ca760e109c78d3a6bf13999d99d76ca0b7d/merged/etc/group: no such file or directory"
	Nov 21 14:38:40 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:40.317496558Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:40 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:40.335645227Z" level=info msg="Created container 2150a09e5bcb1bf08af2f5cf2b464ade235c8519f224e03da1a9c2e61df779e0: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lv25l/kubernetes-dashboard" id=3da5e721-9cd1-46b6-ac42-bd513bbfa88c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:38:40 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:40.336273183Z" level=info msg="Starting container: 2150a09e5bcb1bf08af2f5cf2b464ade235c8519f224e03da1a9c2e61df779e0" id=b5adf999-aa33-4156-aacf-c8d7a5d2f910 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:38:40 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:40.338477713Z" level=info msg="Started container" PID=1508 containerID=2150a09e5bcb1bf08af2f5cf2b464ade235c8519f224e03da1a9c2e61df779e0 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lv25l/kubernetes-dashboard id=b5adf999-aa33-4156-aacf-c8d7a5d2f910 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1cfaa0204ff7c375a6c5e1335c73aa4bc0234738f1ad6ae3223b5ef7d26c090d
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.563952354Z" level=info msg="Pulled image: registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=2c84f797-f53b-4b5e-a60f-cb5681d61b77 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.56468972Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=98e9f044-1189-4d94-9bde-52193b55f062 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.567205192Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2/dashboard-metrics-scraper" id=a4485f1c-95ad-47fd-a072-344b0dff7989 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.567328525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.574919849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.575587739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.595169317Z" level=info msg="Created container e3a314f603b1c71bd9fcfd77fa37834ce69ba249f39a060e2d3e2d715736b411: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2/dashboard-metrics-scraper" id=a4485f1c-95ad-47fd-a072-344b0dff7989 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.595628376Z" level=info msg="Starting container: e3a314f603b1c71bd9fcfd77fa37834ce69ba249f39a060e2d3e2d715736b411" id=4e434473-c505-4567-984f-4a7816f19aba name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:38:42 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:42.597169262Z" level=info msg="Started container" PID=1737 containerID=e3a314f603b1c71bd9fcfd77fa37834ce69ba249f39a060e2d3e2d715736b411 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2/dashboard-metrics-scraper id=4e434473-c505-4567-984f-4a7816f19aba name=/runtime.v1.RuntimeService/StartContainer sandboxID=e38761546fd4b20289aac33d339edd859e1e6d788210ea6c9aa7c1248fde6e82
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.619493011Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=713ca127-5338-4c57-b347-c42f9995f6a6 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.622303068Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7d14b3ec-8532-4f13-aacc-b5e6f03fc1fc name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.625421379Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2/dashboard-metrics-scraper" id=73c06c58-df1e-4545-9ee8-c62c12a165d3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.625579009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.63464829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.635341323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.670137431Z" level=info msg="Created container 4d8b103d6a3b58640169648cc2590ab988fe248cbc99f9be97c587bb1abcbb50: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2/dashboard-metrics-scraper" id=73c06c58-df1e-4545-9ee8-c62c12a165d3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.670682698Z" level=info msg="Starting container: 4d8b103d6a3b58640169648cc2590ab988fe248cbc99f9be97c587bb1abcbb50" id=3f5fdbdc-55df-404b-bd12-86c09012f432 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:38:43 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:43.672369079Z" level=info msg="Started container" PID=1761 containerID=4d8b103d6a3b58640169648cc2590ab988fe248cbc99f9be97c587bb1abcbb50 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2/dashboard-metrics-scraper id=3f5fdbdc-55df-404b-bd12-86c09012f432 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e38761546fd4b20289aac33d339edd859e1e6d788210ea6c9aa7c1248fde6e82
	Nov 21 14:38:44 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:44.624219254Z" level=info msg="Removing container: e3a314f603b1c71bd9fcfd77fa37834ce69ba249f39a060e2d3e2d715736b411" id=114c3f4e-2225-4584-965b-dcc838f5e880 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 14:38:44 old-k8s-version-794941 crio[567]: time="2025-11-21T14:38:44.637548474Z" level=info msg="Removed container e3a314f603b1c71bd9fcfd77fa37834ce69ba249f39a060e2d3e2d715736b411: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2/dashboard-metrics-scraper" id=114c3f4e-2225-4584-965b-dcc838f5e880 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	4d8b103d6a3b5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   1                   e38761546fd4b       dashboard-metrics-scraper-5f989dc9cf-27sz2       kubernetes-dashboard
	2150a09e5bcb1       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   18 seconds ago      Running             kubernetes-dashboard        0                   1cfaa0204ff7c       kubernetes-dashboard-8694d4445c-lv25l            kubernetes-dashboard
	4ab4be0d66cf2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           29 seconds ago      Running             coredns                     0                   2d4641210135e       coredns-5dd5756b68-h4xjd                         kube-system
	597ff795a9610       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           29 seconds ago      Running             busybox                     1                   b6668d4d37337       busybox                                          default
	b23cc7e23724e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           33 seconds ago      Exited              storage-provisioner         0                   3246ec2c676c5       storage-provisioner                              kube-system
	1f61fa83329fe       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           33 seconds ago      Running             kube-proxy                  0                   cfc42244a6ab8       kube-proxy-w4rcg                                 kube-system
	3bd64c70ef61a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           33 seconds ago      Running             kindnet-cni                 0                   a078d24c3b03a       kindnet-9pjsf                                    kube-system
	fed082a62a98e       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           36 seconds ago      Running             kube-apiserver              0                   37122919829de       kube-apiserver-old-k8s-version-794941            kube-system
	17a449a6e8800       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           36 seconds ago      Running             kube-controller-manager     0                   2bf10298ea9b0       kube-controller-manager-old-k8s-version-794941   kube-system
	47db37c9bce4d       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           36 seconds ago      Running             kube-scheduler              0                   18040a6f08d7e       kube-scheduler-old-k8s-version-794941            kube-system
	28544b475c823       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           36 seconds ago      Running             etcd                        0                   645d9dd3979da       etcd-old-k8s-version-794941                      kube-system
	
	
	==> coredns [4ab4be0d66cf2f3b88b6055f3583c7696ad3a4d9514e845bd33fba43590dcd21] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51571 - 62285 "HINFO IN 1393753478946371822.3360243421546352995. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.471916445s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-794941
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-794941
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=old-k8s-version-794941
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_37_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:37:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-794941
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:38:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:38:34 +0000   Fri, 21 Nov 2025 14:37:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:38:34 +0000   Fri, 21 Nov 2025 14:37:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:38:34 +0000   Fri, 21 Nov 2025 14:37:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:38:34 +0000   Fri, 21 Nov 2025 14:38:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-794941
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                7ff44db4-ba6f-408c-b662-b0a6f3e0bc74
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 coredns-5dd5756b68-h4xjd                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     86s
	  kube-system                 etcd-old-k8s-version-794941                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         99s
	  kube-system                 kindnet-9pjsf                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      86s
	  kube-system                 kube-apiserver-old-k8s-version-794941             250m (3%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-old-k8s-version-794941    200m (2%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-w4rcg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-old-k8s-version-794941             100m (1%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-27sz2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-lv25l             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 85s                kube-proxy       
	  Normal  Starting                 33s                kube-proxy       
	  Normal  NodeHasSufficientMemory  99s                kubelet          Node old-k8s-version-794941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                kubelet          Node old-k8s-version-794941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                kubelet          Node old-k8s-version-794941 status is now: NodeHasSufficientPID
	  Normal  Starting                 99s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           87s                node-controller  Node old-k8s-version-794941 event: Registered Node old-k8s-version-794941 in Controller
	  Normal  NodeReady                73s                kubelet          Node old-k8s-version-794941 status is now: NodeReady
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node old-k8s-version-794941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node old-k8s-version-794941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node old-k8s-version-794941 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22s                node-controller  Node old-k8s-version-794941 event: Registered Node old-k8s-version-794941 in Controller
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [28544b475c8231677d3576e4a5811ded27f16bd71f8c79e10c8f00528a254273] <==
	{"level":"info","ts":"2025-11-21T14:38:22.095621Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-21T14:38:22.095634Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-21T14:38:22.095945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-11-21T14:38:22.096039Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-21T14:38:22.096156Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:38:22.096192Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-21T14:38:22.098416Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-21T14:38:22.099777Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-21T14:38:22.099827Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-21T14:38:22.099954Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-21T14:38:22.100021Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-21T14:38:23.38905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-21T14:38:23.389108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-21T14:38:23.389136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-21T14:38:23.38915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-11-21T14:38:23.389155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-21T14:38:23.389163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-11-21T14:38:23.389173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-21T14:38:23.391023Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:38:23.391037Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-21T14:38:23.391023Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-794941 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-21T14:38:23.391298Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-21T14:38:23.391333Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-21T14:38:23.39214Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-21T14:38:23.392236Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 14:38:58 up  1:21,  0 user,  load average: 1.97, 2.24, 1.57
	Linux old-k8s-version-794941 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3bd64c70ef61aa1b66639aa8514d2672609ec76928e1cb2edc4b87e7935f0879] <==
	I1121 14:38:25.157543       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:38:25.157860       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1121 14:38:25.157977       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:38:25.157993       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:38:25.158011       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:38:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:38:25.384458       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:38:25.384494       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:38:25.384524       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:38:25.384713       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:38:25.785703       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:38:25.788059       1 metrics.go:72] Registering metrics
	I1121 14:38:25.788434       1 controller.go:711] "Syncing nftables rules"
	I1121 14:38:35.359478       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:38:35.359543       1 main.go:301] handling current node
	I1121 14:38:45.360034       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:38:45.360063       1 main.go:301] handling current node
	I1121 14:38:55.365288       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:38:55.365330       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fed082a62a98e6cf91511c92c50665b154e1cc4e2eb218f55bee3856ca0f4a01] <==
	I1121 14:38:24.388333       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1121 14:38:24.388345       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1121 14:38:24.388501       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1121 14:38:24.388542       1 aggregator.go:166] initial CRD sync complete...
	I1121 14:38:24.388553       1 autoregister_controller.go:141] Starting autoregister controller
	I1121 14:38:24.388581       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:38:24.388589       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:38:24.388352       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1121 14:38:24.388355       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1121 14:38:24.388375       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1121 14:38:24.388391       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1121 14:38:24.388392       1 shared_informer.go:318] Caches are synced for configmaps
	I1121 14:38:24.433772       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:38:25.154060       1 controller.go:624] quota admission added evaluator for: namespaces
	I1121 14:38:25.184824       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1121 14:38:25.201194       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:38:25.207275       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:38:25.215892       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1121 14:38:25.247963       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.184.28"}
	I1121 14:38:25.261170       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.130.235"}
	I1121 14:38:25.288784       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:38:36.523127       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1121 14:38:36.538643       1 controller.go:624] quota admission added evaluator for: endpoints
	I1121 14:38:36.581138       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:38:36.581140       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [17a449a6e88003887696d16299028555a4ecfd3f1608112d0eea538b806b73b5] <==
	I1121 14:38:36.545437       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-lv25l"
	I1121 14:38:36.547228       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-27sz2"
	I1121 14:38:36.554069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="24.330902ms"
	I1121 14:38:36.554287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="23.424967ms"
	I1121 14:38:36.559639       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.431509ms"
	I1121 14:38:36.559722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.346µs"
	I1121 14:38:36.561027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="6.70214ms"
	I1121 14:38:36.561118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.242µs"
	I1121 14:38:36.561896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="36.777µs"
	I1121 14:38:36.575099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.52µs"
	I1121 14:38:36.617331       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1121 14:38:36.635356       1 shared_informer.go:318] Caches are synced for disruption
	I1121 14:38:36.656479       1 shared_informer.go:318] Caches are synced for resource quota
	I1121 14:38:36.740511       1 shared_informer.go:318] Caches are synced for resource quota
	I1121 14:38:37.060839       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:38:37.061931       1 shared_informer.go:318] Caches are synced for garbage collector
	I1121 14:38:37.061959       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1121 14:38:40.629269       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.38502ms"
	I1121 14:38:40.629935       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="83.64µs"
	I1121 14:38:42.628736       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="5.019742ms"
	I1121 14:38:42.628829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.582µs"
	I1121 14:38:43.636435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="6.224029ms"
	I1121 14:38:43.636740       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.839µs"
	I1121 14:38:44.632614       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.404µs"
	I1121 14:38:45.634096       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.035µs"
	
	
	==> kube-proxy [1f61fa83329fef694908c75e518a0913b06691afd9040978f849e56c23e5e16d] <==
	I1121 14:38:24.943244       1 server_others.go:69] "Using iptables proxy"
	I1121 14:38:24.954405       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1121 14:38:24.973995       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:38:24.978345       1 server_others.go:152] "Using iptables Proxier"
	I1121 14:38:24.978377       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1121 14:38:24.978385       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1121 14:38:24.978407       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1121 14:38:24.978660       1 server.go:846] "Version info" version="v1.28.0"
	I1121 14:38:24.978677       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:38:24.979289       1 config.go:188] "Starting service config controller"
	I1121 14:38:24.979321       1 config.go:97] "Starting endpoint slice config controller"
	I1121 14:38:24.979327       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1121 14:38:24.979331       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1121 14:38:24.979352       1 config.go:315] "Starting node config controller"
	I1121 14:38:24.979380       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1121 14:38:25.080049       1 shared_informer.go:318] Caches are synced for service config
	I1121 14:38:25.080077       1 shared_informer.go:318] Caches are synced for node config
	I1121 14:38:25.080062       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [47db37c9bce4dd61dd2ba64e670f76cd77726767ef4c3182af1d5a94ede3419e] <==
	I1121 14:38:22.403107       1 serving.go:348] Generated self-signed cert in-memory
	W1121 14:38:24.313570       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1121 14:38:24.313613       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1121 14:38:24.313635       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1121 14:38:24.313646       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1121 14:38:24.343464       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1121 14:38:24.343521       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:38:24.344809       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:38:24.344852       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1121 14:38:24.345845       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1121 14:38:24.345910       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1121 14:38:24.445708       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 21 14:38:26 old-k8s-version-794941 kubelet[735]: E1121 14:38:26.221430     735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c7fd9b1-424a-4401-932f-775af443b1b0-config-volume podName:5c7fd9b1-424a-4401-932f-775af443b1b0 nodeName:}" failed. No retries permitted until 2025-11-21 14:38:28.221405112 +0000 UTC m=+6.755683982 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5c7fd9b1-424a-4401-932f-775af443b1b0-config-volume") pod "coredns-5dd5756b68-h4xjd" (UID: "5c7fd9b1-424a-4401-932f-775af443b1b0") : object "kube-system"/"coredns" not registered
	Nov 21 14:38:26 old-k8s-version-794941 kubelet[735]: E1121 14:38:26.321761     735 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 21 14:38:26 old-k8s-version-794941 kubelet[735]: E1121 14:38:26.321789     735 projected.go:198] Error preparing data for projected volume kube-api-access-pgftz for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 21 14:38:26 old-k8s-version-794941 kubelet[735]: E1121 14:38:26.321848     735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d07a0f79-8b73-4999-a3a1-654a71184bf3-kube-api-access-pgftz podName:d07a0f79-8b73-4999-a3a1-654a71184bf3 nodeName:}" failed. No retries permitted until 2025-11-21 14:38:28.321829426 +0000 UTC m=+6.856108291 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pgftz" (UniqueName: "kubernetes.io/projected/d07a0f79-8b73-4999-a3a1-654a71184bf3-kube-api-access-pgftz") pod "busybox" (UID: "d07a0f79-8b73-4999-a3a1-654a71184bf3") : object "default"/"kube-root-ca.crt" not registered
	Nov 21 14:38:35 old-k8s-version-794941 kubelet[735]: I1121 14:38:35.037224     735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 21 14:38:36 old-k8s-version-794941 kubelet[735]: I1121 14:38:36.551177     735 topology_manager.go:215] "Topology Admit Handler" podUID="876cea80-de57-4e49-bcb2-c83a9dddd295" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-lv25l"
	Nov 21 14:38:36 old-k8s-version-794941 kubelet[735]: I1121 14:38:36.553501     735 topology_manager.go:215] "Topology Admit Handler" podUID="4a9cd23b-d72e-4eca-a7e1-bb5b8600e591" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-27sz2"
	Nov 21 14:38:36 old-k8s-version-794941 kubelet[735]: I1121 14:38:36.675049     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb6sf\" (UniqueName: \"kubernetes.io/projected/876cea80-de57-4e49-bcb2-c83a9dddd295-kube-api-access-jb6sf\") pod \"kubernetes-dashboard-8694d4445c-lv25l\" (UID: \"876cea80-de57-4e49-bcb2-c83a9dddd295\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lv25l"
	Nov 21 14:38:36 old-k8s-version-794941 kubelet[735]: I1121 14:38:36.675108     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/876cea80-de57-4e49-bcb2-c83a9dddd295-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-lv25l\" (UID: \"876cea80-de57-4e49-bcb2-c83a9dddd295\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lv25l"
	Nov 21 14:38:36 old-k8s-version-794941 kubelet[735]: I1121 14:38:36.675202     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4a9cd23b-d72e-4eca-a7e1-bb5b8600e591-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-27sz2\" (UID: \"4a9cd23b-d72e-4eca-a7e1-bb5b8600e591\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2"
	Nov 21 14:38:36 old-k8s-version-794941 kubelet[735]: I1121 14:38:36.675235     735 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4jbk\" (UniqueName: \"kubernetes.io/projected/4a9cd23b-d72e-4eca-a7e1-bb5b8600e591-kube-api-access-k4jbk\") pod \"dashboard-metrics-scraper-5f989dc9cf-27sz2\" (UID: \"4a9cd23b-d72e-4eca-a7e1-bb5b8600e591\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2"
	Nov 21 14:38:41 old-k8s-version-794941 kubelet[735]: I1121 14:38:41.957273     735 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lv25l" podStartSLOduration=2.52386554 podCreationTimestamp="2025-11-21 14:38:36 +0000 UTC" firstStartedPulling="2025-11-21 14:38:36.876388239 +0000 UTC m=+15.410667116" lastFinishedPulling="2025-11-21 14:38:40.309733852 +0000 UTC m=+18.844012738" observedRunningTime="2025-11-21 14:38:40.622796927 +0000 UTC m=+19.157075807" watchObservedRunningTime="2025-11-21 14:38:41.957211162 +0000 UTC m=+20.491490050"
	Nov 21 14:38:42 old-k8s-version-794941 kubelet[735]: I1121 14:38:42.624210     735 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2" podStartSLOduration=0.936570685 podCreationTimestamp="2025-11-21 14:38:36 +0000 UTC" firstStartedPulling="2025-11-21 14:38:36.876614439 +0000 UTC m=+15.410893318" lastFinishedPulling="2025-11-21 14:38:42.564190642 +0000 UTC m=+21.098469510" observedRunningTime="2025-11-21 14:38:42.623436645 +0000 UTC m=+21.157715530" watchObservedRunningTime="2025-11-21 14:38:42.624146877 +0000 UTC m=+21.158425763"
	Nov 21 14:38:43 old-k8s-version-794941 kubelet[735]: I1121 14:38:43.619023     735 scope.go:117] "RemoveContainer" containerID="e3a314f603b1c71bd9fcfd77fa37834ce69ba249f39a060e2d3e2d715736b411"
	Nov 21 14:38:44 old-k8s-version-794941 kubelet[735]: I1121 14:38:44.622608     735 scope.go:117] "RemoveContainer" containerID="e3a314f603b1c71bd9fcfd77fa37834ce69ba249f39a060e2d3e2d715736b411"
	Nov 21 14:38:44 old-k8s-version-794941 kubelet[735]: I1121 14:38:44.622800     735 scope.go:117] "RemoveContainer" containerID="4d8b103d6a3b58640169648cc2590ab988fe248cbc99f9be97c587bb1abcbb50"
	Nov 21 14:38:44 old-k8s-version-794941 kubelet[735]: E1121 14:38:44.623193     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-27sz2_kubernetes-dashboard(4a9cd23b-d72e-4eca-a7e1-bb5b8600e591)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2" podUID="4a9cd23b-d72e-4eca-a7e1-bb5b8600e591"
	Nov 21 14:38:45 old-k8s-version-794941 kubelet[735]: I1121 14:38:45.625737     735 scope.go:117] "RemoveContainer" containerID="4d8b103d6a3b58640169648cc2590ab988fe248cbc99f9be97c587bb1abcbb50"
	Nov 21 14:38:45 old-k8s-version-794941 kubelet[735]: E1121 14:38:45.625975     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-27sz2_kubernetes-dashboard(4a9cd23b-d72e-4eca-a7e1-bb5b8600e591)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2" podUID="4a9cd23b-d72e-4eca-a7e1-bb5b8600e591"
	Nov 21 14:38:46 old-k8s-version-794941 kubelet[735]: I1121 14:38:46.855876     735 scope.go:117] "RemoveContainer" containerID="4d8b103d6a3b58640169648cc2590ab988fe248cbc99f9be97c587bb1abcbb50"
	Nov 21 14:38:46 old-k8s-version-794941 kubelet[735]: E1121 14:38:46.856117     735 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-27sz2_kubernetes-dashboard(4a9cd23b-d72e-4eca-a7e1-bb5b8600e591)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-27sz2" podUID="4a9cd23b-d72e-4eca-a7e1-bb5b8600e591"
	Nov 21 14:38:54 old-k8s-version-794941 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:38:54 old-k8s-version-794941 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:38:54 old-k8s-version-794941 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 21 14:38:54 old-k8s-version-794941 systemd[1]: kubelet.service: Consumed 1.031s CPU time.
	
	
	==> kubernetes-dashboard [2150a09e5bcb1bf08af2f5cf2b464ade235c8519f224e03da1a9c2e61df779e0] <==
	2025/11/21 14:38:40 Using namespace: kubernetes-dashboard
	2025/11/21 14:38:40 Using in-cluster config to connect to apiserver
	2025/11/21 14:38:40 Using secret token for csrf signing
	2025/11/21 14:38:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 14:38:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 14:38:40 Successful initial request to the apiserver, version: v1.28.0
	2025/11/21 14:38:40 Generating JWE encryption key
	2025/11/21 14:38:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 14:38:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 14:38:40 Initializing JWE encryption key from synchronized object
	2025/11/21 14:38:40 Creating in-cluster Sidecar client
	2025/11/21 14:38:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 14:38:40 Serving insecurely on HTTP port: 9090
	2025/11/21 14:38:40 Starting overwatch
	
	
	==> storage-provisioner [b23cc7e23724e9a3c16a0b5ddd305db2a7dbec8d3ac78fc1a54bc3bf1a179ad0] <==
	I1121 14:38:24.902974       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 14:38:54.905815       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
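The fatal storage-provisioner line above is its startup probe of the apiserver timing out: the pod dials the kubernetes service VIP (10.96.0.1:443) and gets no answer within 30s, plausibly related to the node being paused at about the same moment (see the kubelet stop at 14:38:54 above). For reference, a minimal client-go sketch of an equivalent in-cluster probe (an illustration only, assuming it runs inside a pod; this is not the provisioner's actual source):

	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config resolves the apiserver through the kubernetes
		// service VIP, the same 10.96.0.1:443 endpoint that times out above.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("in-cluster config: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("clientset: %v", err)
		}
		// The discovery client issues GET /version with its default
		// 32s timeout, matching the failed request in the log.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			log.Fatalf("error getting server version: %v", err)
		}
		fmt.Println("apiserver:", v.GitVersion)
	}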
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-794941 -n old-k8s-version-794941
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-794941 -n old-k8s-version-794941: exit status 2 (312.00291ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-794941 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-441390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-441390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (250.839429ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:39:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
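Per the error chain above ("check paused: list paused: runc: sudo runc list -f json"), the addon enable aborted in its pre-flight paused-state check: minikube lists containers on the node with sudo runc list -f json, and on this crio node the runc state directory /run/runc does not exist, so the listing itself fails. Assuming the profile is still running, the failure should be reproducible directly against the node:

	$ out/minikube-linux-amd64 ssh -p embed-certs-441390 -- sudo runc list -f json
	time="..." level=error msg="open /run/runc: no such file or directory"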
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-441390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-441390 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-441390 describe deploy/metrics-server -n kube-system: exit status 1 (60.144034ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-441390 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
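For context, the expected reference in the assertion above is the --registries value joined to the --images value from the enable command; a trivial sketch of that composition (a hypothetical helper for illustration, not minikube's code):

	package main

	import "fmt"

	func main() {
		registry := "fake.domain"                 // --registries=MetricsServer=fake.domain
		image := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=registry.k8s.io/echoserver:1.4
		fmt.Println(registry + "/" + image)       // prints fake.domain/registry.k8s.io/echoserver:1.4
	}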
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-441390
helpers_test.go:243: (dbg) docker inspect embed-certs-441390:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78",
	        "Created": "2025-11-21T14:39:07.796898766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 256627,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:39:07.827812278Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78/hostname",
	        "HostsPath": "/var/lib/docker/containers/0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78/hosts",
	        "LogPath": "/var/lib/docker/containers/0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78/0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78-json.log",
	        "Name": "/embed-certs-441390",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-441390:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-441390",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78",
	                "LowerDir": "/var/lib/docker/overlay2/600fd769bdab16b7cfa0c469ccebb67ba68133c5b4bce708cd3a08511bd496b4-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/600fd769bdab16b7cfa0c469ccebb67ba68133c5b4bce708cd3a08511bd496b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/600fd769bdab16b7cfa0c469ccebb67ba68133c5b4bce708cd3a08511bd496b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/600fd769bdab16b7cfa0c469ccebb67ba68133c5b4bce708cd3a08511bd496b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-441390",
	                "Source": "/var/lib/docker/volumes/embed-certs-441390/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-441390",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-441390",
	                "name.minikube.sigs.k8s.io": "embed-certs-441390",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f7d1731bdcc5f181b46ad27ac14078a0aa62e842631787459677df25b0cfdcc3",
	            "SandboxKey": "/var/run/docker/netns/f7d1731bdcc5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-441390": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e6dc762b4b87807c44de5ce5e6dedcc7963047110765e9594324098021783415",
	                    "EndpointID": "44455f8e143efc77e3943c55f3179e0d49fcc1cc50d0fb8b489123ec0b06d6bf",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "6e:c9:d4:25:2d:0b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-441390",
	                        "0ce231a2efd9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
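
The NetworkSettings.Ports map above is where the test harness finds the host-side endpoints of the kic container; the API server port 8443/tcp, for instance, is published at 127.0.0.1:33077. A minimal Go sketch of extracting that mapping, assuming the full `docker container inspect` JSON (an array, as above) arrives on stdin; the hard-coded "8443/tcp" key and the file name are illustrative only:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// binding mirrors one entry of NetworkSettings.Ports in the inspect JSON.
type binding struct {
	HostIp   string
	HostPort string
}

type container struct {
	NetworkSettings struct {
		Ports map[string][]binding
	}
}

func main() {
	// `docker container inspect` prints a JSON array, one object per container.
	var containers []container
	if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, c := range containers {
		// For the dump above this prints 127.0.0.1:33077.
		for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
		}
	}
}

Usage (saved as, say, portmap.go): docker container inspect embed-certs-441390 | go run portmap.go
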
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-441390 -n embed-certs-441390
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-441390 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-441390 logs -n 25: (1.028025753s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p pause-738756 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                                     │ pause-738756              │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:37 UTC │
	│ ssh     │ cert-options-116734 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-116734       │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ ssh     │ -p cert-options-116734 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-116734       │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ delete  │ -p cert-options-116734                                                                                                                                                                                                                        │ cert-options-116734       │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:37 UTC │
	│ start   │ -p pause-738756 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-738756              │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ pause   │ -p pause-738756 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-738756              │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	│ delete  │ -p pause-738756                                                                                                                                                                                                                               │ pause-738756              │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ start   │ -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-794941 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	│ stop    │ -p old-k8s-version-794941 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-794941 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ start   │ -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-589411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │                     │
	│ stop    │ -p no-preload-589411 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ image   │ old-k8s-version-794941 image list --format=json                                                                                                                                                                                               │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ pause   │ -p old-k8s-version-794941 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-589411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ start   │ -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:39 UTC │
	│ delete  │ -p old-k8s-version-794941                                                                                                                                                                                                                     │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:39 UTC │
	│ delete  │ -p old-k8s-version-794941                                                                                                                                                                                                                     │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ start   │ -p embed-certs-441390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-441390        │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ start   │ -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-214044 │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ start   │ -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-214044 │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-441390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-441390        │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
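
Rows with an empty END TIME are commands that had either failed or were still in flight when the table was captured. The last row, the addons enable metrics-server invocation against embed-certs-441390, is the command whose failure this post-mortem covers.
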
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:39:52
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:39:52.496919  261994 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:39:52.497049  261994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:39:52.497060  261994 out.go:374] Setting ErrFile to fd 2...
	I1121 14:39:52.497068  261994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:39:52.497253  261994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:39:52.497811  261994 out.go:368] Setting JSON to false
	I1121 14:39:52.499018  261994 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4941,"bootTime":1763731051,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:39:52.499071  261994 start.go:143] virtualization: kvm guest
	I1121 14:39:52.500413  261994 out.go:179] * [kubernetes-upgrade-214044] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:39:52.501850  261994 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:39:52.501926  261994 notify.go:221] Checking for updates...
	I1121 14:39:52.504177  261994 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:39:52.505422  261994 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:39:52.506547  261994 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:39:52.507529  261994 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:39:52.508630  261994 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:39:52.510015  261994 config.go:182] Loaded profile config "kubernetes-upgrade-214044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:39:52.510423  261994 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:39:52.537118  261994 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:39:52.537204  261994 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:39:52.596709  261994 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-21 14:39:52.587095337 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:39:52.596802  261994 docker.go:319] overlay module found
	I1121 14:39:52.598329  261994 out.go:179] * Using the docker driver based on existing profile
	I1121 14:39:52.599329  261994 start.go:309] selected driver: docker
	I1121 14:39:52.599349  261994 start.go:930] validating driver "docker" against &{Name:kubernetes-upgrade-214044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-214044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:39:52.599455  261994 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:39:52.600254  261994 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:39:52.665926  261994 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-21 14:39:52.655707888 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:39:52.666306  261994 cni.go:84] Creating CNI manager for ""
	I1121 14:39:52.666380  261994 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:39:52.666485  261994 start.go:353] cluster config:
	{Name:kubernetes-upgrade-214044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-214044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:39:52.668839  261994 out.go:179] * Starting "kubernetes-upgrade-214044" primary control-plane node in "kubernetes-upgrade-214044" cluster
	I1121 14:39:52.670018  261994 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:39:52.671151  261994 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:39:52.672508  261994 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:39:52.672544  261994 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 14:39:52.672573  261994 cache.go:65] Caching tarball of preloaded images
	I1121 14:39:52.672624  261994 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:39:52.672694  261994 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 14:39:52.672711  261994 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:39:52.672836  261994 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/config.json ...
	I1121 14:39:52.695148  261994 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:39:52.695169  261994 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:39:52.695189  261994 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:39:52.695224  261994 start.go:360] acquireMachinesLock for kubernetes-upgrade-214044: {Name:mk6cde14689d14fd2f260e673006d87e7504059f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:39:52.695285  261994 start.go:364] duration metric: took 38.177µs to acquireMachinesLock for "kubernetes-upgrade-214044"
	I1121 14:39:52.695305  261994 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:39:52.695313  261994 fix.go:54] fixHost starting: 
	I1121 14:39:52.695625  261994 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-214044 --format={{.State.Status}}
	I1121 14:39:52.712428  261994 fix.go:112] recreateIfNeeded on kubernetes-upgrade-214044: state=Running err=<nil>
	W1121 14:39:52.712452  261994 fix.go:138] unexpected machine state, will restart: <nil>
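
The acquireMachinesLock lines above describe a poll-until-deadline lock (Delay:500ms Timeout:10m0s) that was acquired immediately here (38.177µs). A minimal Go sketch of that retry pattern; tryAcquire is a hypothetical stand-in for whatever lock primitive minikube actually uses:

package main

import (
	"errors"
	"fmt"
	"time"
)

// tryAcquire is a hypothetical stand-in for the real machines-lock primitive.
func tryAcquire(name string) bool { return true }

// acquireWithRetry polls tryAcquire every delay until timeout elapses,
// matching the Delay:500ms Timeout:10m0s parameters logged above.
func acquireWithRetry(name string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if tryAcquire(name) {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + name)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	if err := acquireWithRetry("kubernetes-upgrade-214044", 500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	// The uncontended case returns in microseconds, as in the log above.
	fmt.Printf("acquired in %s\n", time.Since(start))
}
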
	
	
	==> CRI-O <==
	Nov 21 14:39:41 embed-certs-441390 crio[770]: time="2025-11-21T14:39:41.584334037Z" level=info msg="Starting container: a8392954cdae1ca17cfef5d4d441e7d723d3ef0ff69019d164742a5386c32f76" id=bb704b0c-28f3-4e10-bbab-c6b18c374433 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:39:41 embed-certs-441390 crio[770]: time="2025-11-21T14:39:41.586317695Z" level=info msg="Started container" PID=1851 containerID=a8392954cdae1ca17cfef5d4d441e7d723d3ef0ff69019d164742a5386c32f76 description=kube-system/coredns-66bc5c9577-sbjhs/coredns id=bb704b0c-28f3-4e10-bbab-c6b18c374433 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c0a6959ac0c3b5597643d510bdc6febcf90762697815980c117b8aa57944000
	Nov 21 14:39:44 embed-certs-441390 crio[770]: time="2025-11-21T14:39:44.778206844Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e8f07cb7-dbf1-4afe-a678-29a058bebe6b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:39:44 embed-certs-441390 crio[770]: time="2025-11-21T14:39:44.778268495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:39:44 embed-certs-441390 crio[770]: time="2025-11-21T14:39:44.784100688Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fcc869bccf47cb78b1a8c6b9fb4ae7d1dfde7940fede5728d1661ace65915037 UID:e3e88ed6-52f6-4e30-97ba-30031a549261 NetNS:/var/run/netns/cc528d1b-f182-4a1e-930b-94e0b788b495 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d12020}] Aliases:map[]}"
	Nov 21 14:39:44 embed-certs-441390 crio[770]: time="2025-11-21T14:39:44.784168519Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 21 14:39:44 embed-certs-441390 crio[770]: time="2025-11-21T14:39:44.792673757Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fcc869bccf47cb78b1a8c6b9fb4ae7d1dfde7940fede5728d1661ace65915037 UID:e3e88ed6-52f6-4e30-97ba-30031a549261 NetNS:/var/run/netns/cc528d1b-f182-4a1e-930b-94e0b788b495 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d12020}] Aliases:map[]}"
	Nov 21 14:39:44 embed-certs-441390 crio[770]: time="2025-11-21T14:39:44.79280921Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 21 14:39:44 embed-certs-441390 crio[770]: time="2025-11-21T14:39:44.793423661Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 14:39:44 embed-certs-441390 crio[770]: time="2025-11-21T14:39:44.794201725Z" level=info msg="Ran pod sandbox fcc869bccf47cb78b1a8c6b9fb4ae7d1dfde7940fede5728d1661ace65915037 with infra container: default/busybox/POD" id=e8f07cb7-dbf1-4afe-a678-29a058bebe6b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:39:44 embed-certs-441390 crio[770]: time="2025-11-21T14:39:44.795202026Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cc2dc0fb-1d81-4726-8175-a7065adc08f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:39:44 embed-certs-441390 crio[770]: time="2025-11-21T14:39:44.795299785Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cc2dc0fb-1d81-4726-8175-a7065adc08f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:39:44 embed-certs-441390 crio[770]: time="2025-11-21T14:39:44.795328697Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=cc2dc0fb-1d81-4726-8175-a7065adc08f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:39:44 embed-certs-441390 crio[770]: time="2025-11-21T14:39:44.796031619Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2f04b4b9-e409-41d7-832f-43684e7e2675 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:39:44 embed-certs-441390 crio[770]: time="2025-11-21T14:39:44.79932189Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:39:45 embed-certs-441390 crio[770]: time="2025-11-21T14:39:45.680680074Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=2f04b4b9-e409-41d7-832f-43684e7e2675 name=/runtime.v1.ImageService/PullImage
	Nov 21 14:39:45 embed-certs-441390 crio[770]: time="2025-11-21T14:39:45.681302799Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3a08bb4a-7300-4789-a0aa-10f3c7b35e79 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:39:45 embed-certs-441390 crio[770]: time="2025-11-21T14:39:45.682554991Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fcb48330-ec28-4eb0-b6ad-7ab6f9aa9e5b name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:39:45 embed-certs-441390 crio[770]: time="2025-11-21T14:39:45.685685091Z" level=info msg="Creating container: default/busybox/busybox" id=997ef446-4cae-4ba8-b3d3-a00fa25b1968 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:39:45 embed-certs-441390 crio[770]: time="2025-11-21T14:39:45.685780772Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:39:45 embed-certs-441390 crio[770]: time="2025-11-21T14:39:45.688951609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:39:45 embed-certs-441390 crio[770]: time="2025-11-21T14:39:45.689304928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:39:45 embed-certs-441390 crio[770]: time="2025-11-21T14:39:45.72140397Z" level=info msg="Created container e4bd5a30cd4a9564d33e045727e2abda0bca0c513501b94e8709b553ad34a4d0: default/busybox/busybox" id=997ef446-4cae-4ba8-b3d3-a00fa25b1968 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:39:45 embed-certs-441390 crio[770]: time="2025-11-21T14:39:45.721848034Z" level=info msg="Starting container: e4bd5a30cd4a9564d33e045727e2abda0bca0c513501b94e8709b553ad34a4d0" id=1ac0eaf8-b7ba-4808-a35c-eb834834ed02 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:39:45 embed-certs-441390 crio[770]: time="2025-11-21T14:39:45.723595867Z" level=info msg="Started container" PID=1929 containerID=e4bd5a30cd4a9564d33e045727e2abda0bca0c513501b94e8709b553ad34a4d0 description=default/busybox/busybox id=1ac0eaf8-b7ba-4808-a35c-eb834834ed02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fcc869bccf47cb78b1a8c6b9fb4ae7d1dfde7940fede5728d1661ace65915037
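
The CRI-O entries above trace the standard CRI call order for bringing up default/busybox: RunPodSandbox, ImageStatus (miss), PullImage, then CreateContainer and StartContainer. A compact Go sketch of that sequence against a hypothetical, simplified client interface (the real clients are the gRPC RuntimeService/ImageService pair the kubelet uses):

package main

import "fmt"

// criClient is a hypothetical, simplified stand-in for the CRI
// RuntimeService/ImageService clients.
type criClient interface {
	RunPodSandbox(pod string) (string, error)
	ImageStatus(image string) (bool, error)
	PullImage(image string) error
	CreateContainer(sandboxID, name, image string) (string, error)
	StartContainer(id string) error
}

// startPod replays the call order visible in the CRI-O log above:
// sandbox first, pull on image miss, then create and start.
func startPod(c criClient, pod, name, image string) error {
	sandboxID, err := c.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}
	found, err := c.ImageStatus(image)
	if err != nil {
		return err
	}
	if !found { // "Image ... not found" in the log triggers a pull
		if err := c.PullImage(image); err != nil {
			return fmt.Errorf("PullImage: %w", err)
		}
	}
	id, err := c.CreateContainer(sandboxID, name, image)
	if err != nil {
		return fmt.Errorf("CreateContainer: %w", err)
	}
	return c.StartContainer(id)
}

// fakeCRI lets the sketch run standalone; the IDs echo the log above.
type fakeCRI struct{}

func (fakeCRI) RunPodSandbox(pod string) (string, error) { return "fcc869bccf47c", nil }
func (fakeCRI) ImageStatus(image string) (bool, error)   { return false, nil }
func (fakeCRI) PullImage(image string) error             { return nil }
func (fakeCRI) CreateContainer(s, n, i string) (string, error) {
	return "e4bd5a30cd4a9", nil
}
func (fakeCRI) StartContainer(id string) error { return nil }

func main() {
	err := startPod(fakeCRI{}, "default/busybox", "busybox",
		"gcr.io/k8s-minikube/busybox:1.28.4-glibc")
	fmt.Println("startPod:", err)
}
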
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	e4bd5a30cd4a9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   fcc869bccf47c       busybox                                      default
	a8392954cdae1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   0c0a6959ac0c3       coredns-66bc5c9577-sbjhs                     kube-system
	7a270e96bf2f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   0d65dd03697cb       storage-provisioner                          kube-system
	6359d56c84d74       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   3af9ec9c72dc8       kindnet-pg6qj                                kube-system
	76e53c562d504       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   54c18845c54a7       kube-proxy-m2nzt                             kube-system
	e8d463990c797       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   c2eb5bfc53ec1       kube-controller-manager-embed-certs-441390   kube-system
	66a12d02b909f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   48f76c75329ee       kube-apiserver-embed-certs-441390            kube-system
	f06643df44f73       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   0acbee42a39bd       kube-scheduler-embed-certs-441390            kube-system
	f60bedfad3dfb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   9b1743860b737       etcd-embed-certs-441390                      kube-system
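
All nine containers are on ATTEMPT 0 (no restarts), with the control-plane set about 33 seconds old, so this is a freshly started cluster rather than a crash-looping one.
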
	
	
	==> coredns [a8392954cdae1ca17cfef5d4d441e7d723d3ef0ff69019d164742a5386c32f76] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33576 - 47832 "HINFO IN 4421971669743937737.8467743882429533359. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074974863s
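
The HINFO query for a long random name is CoreDNS's loop-detection probe (the loop plugin); an NXDOMAIN answer, as seen here, is the expected healthy result.
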
	
	
	==> describe nodes <==
	Name:               embed-certs-441390
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-441390
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=embed-certs-441390
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_39_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:39:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-441390
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:39:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:39:41 +0000   Fri, 21 Nov 2025 14:39:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:39:41 +0000   Fri, 21 Nov 2025 14:39:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:39:41 +0000   Fri, 21 Nov 2025 14:39:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:39:41 +0000   Fri, 21 Nov 2025 14:39:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-441390
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                f6f8f703-6de7-4044-b431-06d9e8823119
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-sbjhs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-441390                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-pg6qj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-441390             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-441390    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-m2nzt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-441390             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node embed-certs-441390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node embed-certs-441390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node embed-certs-441390 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node embed-certs-441390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node embed-certs-441390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node embed-certs-441390 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node embed-certs-441390 event: Registered Node embed-certs-441390 in Controller
	  Normal  NodeReady                12s                kubelet          Node embed-certs-441390 status is now: NodeReady
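
For reference, the percentages above are computed against the node's allocatable figures: 850m CPU requested out of 8000m allocatable (8 CPUs) is ~10.6%, truncated to 10%, and 220Mi (225280Ki) out of 32863352Ki memory is ~0.7%, shown as 0%.
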
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
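
A martian source is a packet whose source address is invalid for the interface it arrived on; here the kernel is flagging loopback-sourced (127.0.0.1) packets on the pod network's eth0. Note the Nov21 13:58-13:59 timestamps: this host-wide dmesg predates the embed-certs cluster (created ~14:39) and belongs to an earlier test on the same agent.
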
	
	
	==> etcd [f60bedfad3dfb4745f2e2479bf95e30d4dce34d364bc3588ee5dfbea90c00716] <==
	{"level":"warn","ts":"2025-11-21T14:39:21.156032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.163882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.169587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.175379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.181399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.188077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.195104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.202205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.209184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.226736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.232476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.239417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.246423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.253352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.260130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.267009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.272950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.279062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.285530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.292505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.298864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.314807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.321646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.328260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:39:21.374989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60050","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:39:53 up  1:22,  0 user,  load average: 2.52, 2.43, 1.68
	Linux embed-certs-441390 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6359d56c84d741f9bcd357d27b499e91c546c55cae1373012847c73bee901f35] <==
	I1121 14:39:30.431806       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:39:30.432044       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1121 14:39:30.432191       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:39:30.432209       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:39:30.432235       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:39:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:39:30.789256       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:39:30.789286       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:39:30.789298       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:39:30.789424       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:39:31.089378       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:39:31.089403       1 metrics.go:72] Registering metrics
	I1121 14:39:31.089458       1 controller.go:711] "Syncing nftables rules"
	I1121 14:39:40.727806       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:39:40.727875       1 main.go:301] handling current node
	I1121 14:39:50.727903       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:39:50.727949       1 main.go:301] handling current node
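
The "nri plugin exited" line is non-fatal: /var/run/nri/nri.sock does not exist on this node, and kindnet continues normally, syncing its caches and nftables rules on the lines that follow.
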
	
	
	==> kube-apiserver [66a12d02b909fc97c3916b69f460ff4e15ef03b9f730998a0db49126c858c90c] <==
	I1121 14:39:21.828370       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:39:21.829739       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 14:39:21.830407       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:39:21.830757       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:39:21.836816       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:39:21.837244       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:39:21.855620       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:39:22.731673       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:39:22.735575       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:39:22.735594       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:39:23.166279       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:39:23.199915       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:39:23.334752       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:39:23.340089       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1121 14:39:23.340939       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:39:23.344456       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:39:23.756282       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:39:24.427621       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:39:24.434847       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:39:24.441848       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:39:29.411319       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:39:29.414648       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:39:29.807245       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:39:29.858818       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1121 14:39:52.572944       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:49104: use of closed network connection
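
The single error at 14:39:52 (a read on a closed connection from 192.168.94.1, the bridge gateway, i.e. the host) lines up with a host-side client disconnecting mid-request, most likely during this log collection; it does not indicate an API server fault.
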
	
	
	==> kube-controller-manager [e8d463990c7979947df659c7e12bbcdc4d2bd4df2b13ae7b382f1b0c5ef858b7] <==
	I1121 14:39:28.755600       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:39:28.755618       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:39:28.755639       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:39:28.756841       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:39:28.756877       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:39:28.756925       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:39:28.756951       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:39:28.757051       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:39:28.757107       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 14:39:28.757134       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:39:28.757527       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:39:28.758681       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 14:39:28.759047       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 14:39:28.759144       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 14:39:28.759200       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 14:39:28.759208       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 14:39:28.759215       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 14:39:28.759839       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:39:28.761004       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:39:28.764202       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:39:28.764821       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-441390" podCIDRs=["10.244.0.0/24"]
	I1121 14:39:28.765679       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 14:39:28.769893       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:39:28.780346       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:39:43.713156       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [76e53c562d5049411663d320497e22cbee1e25a9f95019cc09c1801d1e073342] <==
	I1121 14:39:30.298422       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:39:30.367856       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:39:30.468964       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:39:30.468996       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1121 14:39:30.469100       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:39:30.486749       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:39:30.486789       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:39:30.491870       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:39:30.492245       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:39:30.492270       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:39:30.493372       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:39:30.493391       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:39:30.493452       1 config.go:309] "Starting node config controller"
	I1121 14:39:30.493464       1 config.go:200] "Starting service config controller"
	I1121 14:39:30.493468       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:39:30.493475       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:39:30.493473       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:39:30.493494       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:39:30.593586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:39:30.593620       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:39:30.593617       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:39:30.593631       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [f06643df44f73ece77b5b97a2b2f55a6f97dce3bc201d728ca9f1a686f577e0f] <==
	E1121 14:39:21.775825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:39:21.775843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:39:21.775899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:39:21.775932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:39:21.776000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1121 14:39:21.776307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:39:21.776457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:39:21.776505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:39:21.776630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:39:21.776748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:39:21.776850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:39:21.776879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:39:21.776963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:39:21.777005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:39:21.777028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:39:21.777142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:39:22.608988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:39:22.610940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:39:22.673997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:39:22.730505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1121 14:39:22.803104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:39:22.956823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:39:22.998080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:39:23.018156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1121 14:39:25.474062       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:39:25 embed-certs-441390 kubelet[1321]: E1121 14:39:25.289144    1321 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-441390\" already exists" pod="kube-system/etcd-embed-certs-441390"
	Nov 21 14:39:25 embed-certs-441390 kubelet[1321]: I1121 14:39:25.311440    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-441390" podStartSLOduration=1.31142054 podStartE2EDuration="1.31142054s" podCreationTimestamp="2025-11-21 14:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:39:25.302529643 +0000 UTC m=+1.118323673" watchObservedRunningTime="2025-11-21 14:39:25.31142054 +0000 UTC m=+1.127214572"
	Nov 21 14:39:25 embed-certs-441390 kubelet[1321]: I1121 14:39:25.311606    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-441390" podStartSLOduration=1.311592185 podStartE2EDuration="1.311592185s" podCreationTimestamp="2025-11-21 14:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:39:25.311014736 +0000 UTC m=+1.126808766" watchObservedRunningTime="2025-11-21 14:39:25.311592185 +0000 UTC m=+1.127386216"
	Nov 21 14:39:25 embed-certs-441390 kubelet[1321]: I1121 14:39:25.320692    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-441390" podStartSLOduration=3.320675282 podStartE2EDuration="3.320675282s" podCreationTimestamp="2025-11-21 14:39:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:39:25.320620518 +0000 UTC m=+1.136414550" watchObservedRunningTime="2025-11-21 14:39:25.320675282 +0000 UTC m=+1.136469313"
	Nov 21 14:39:25 embed-certs-441390 kubelet[1321]: I1121 14:39:25.337886    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-441390" podStartSLOduration=1.337871239 podStartE2EDuration="1.337871239s" podCreationTimestamp="2025-11-21 14:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:39:25.328530227 +0000 UTC m=+1.144324257" watchObservedRunningTime="2025-11-21 14:39:25.337871239 +0000 UTC m=+1.153665269"
	Nov 21 14:39:28 embed-certs-441390 kubelet[1321]: I1121 14:39:28.768363    1321 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:39:28 embed-certs-441390 kubelet[1321]: I1121 14:39:28.769079    1321 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:39:29 embed-certs-441390 kubelet[1321]: I1121 14:39:29.888592    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/200232c6-7d1f-4ad2-acdf-473aa5ca42aa-cni-cfg\") pod \"kindnet-pg6qj\" (UID: \"200232c6-7d1f-4ad2-acdf-473aa5ca42aa\") " pod="kube-system/kindnet-pg6qj"
	Nov 21 14:39:29 embed-certs-441390 kubelet[1321]: I1121 14:39:29.888640    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/200232c6-7d1f-4ad2-acdf-473aa5ca42aa-xtables-lock\") pod \"kindnet-pg6qj\" (UID: \"200232c6-7d1f-4ad2-acdf-473aa5ca42aa\") " pod="kube-system/kindnet-pg6qj"
	Nov 21 14:39:29 embed-certs-441390 kubelet[1321]: I1121 14:39:29.888670    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50058869-6257-4b96-ab7b-53f1b6ebfa85-lib-modules\") pod \"kube-proxy-m2nzt\" (UID: \"50058869-6257-4b96-ab7b-53f1b6ebfa85\") " pod="kube-system/kube-proxy-m2nzt"
	Nov 21 14:39:29 embed-certs-441390 kubelet[1321]: I1121 14:39:29.888692    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/200232c6-7d1f-4ad2-acdf-473aa5ca42aa-lib-modules\") pod \"kindnet-pg6qj\" (UID: \"200232c6-7d1f-4ad2-acdf-473aa5ca42aa\") " pod="kube-system/kindnet-pg6qj"
	Nov 21 14:39:29 embed-certs-441390 kubelet[1321]: I1121 14:39:29.888718    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fkx6\" (UniqueName: \"kubernetes.io/projected/200232c6-7d1f-4ad2-acdf-473aa5ca42aa-kube-api-access-2fkx6\") pod \"kindnet-pg6qj\" (UID: \"200232c6-7d1f-4ad2-acdf-473aa5ca42aa\") " pod="kube-system/kindnet-pg6qj"
	Nov 21 14:39:29 embed-certs-441390 kubelet[1321]: I1121 14:39:29.888744    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50058869-6257-4b96-ab7b-53f1b6ebfa85-kube-proxy\") pod \"kube-proxy-m2nzt\" (UID: \"50058869-6257-4b96-ab7b-53f1b6ebfa85\") " pod="kube-system/kube-proxy-m2nzt"
	Nov 21 14:39:29 embed-certs-441390 kubelet[1321]: I1121 14:39:29.888769    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50058869-6257-4b96-ab7b-53f1b6ebfa85-xtables-lock\") pod \"kube-proxy-m2nzt\" (UID: \"50058869-6257-4b96-ab7b-53f1b6ebfa85\") " pod="kube-system/kube-proxy-m2nzt"
	Nov 21 14:39:29 embed-certs-441390 kubelet[1321]: I1121 14:39:29.888791    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtdf4\" (UniqueName: \"kubernetes.io/projected/50058869-6257-4b96-ab7b-53f1b6ebfa85-kube-api-access-qtdf4\") pod \"kube-proxy-m2nzt\" (UID: \"50058869-6257-4b96-ab7b-53f1b6ebfa85\") " pod="kube-system/kube-proxy-m2nzt"
	Nov 21 14:39:30 embed-certs-441390 kubelet[1321]: I1121 14:39:30.312461    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-pg6qj" podStartSLOduration=1.312431558 podStartE2EDuration="1.312431558s" podCreationTimestamp="2025-11-21 14:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:39:30.302641468 +0000 UTC m=+6.118435503" watchObservedRunningTime="2025-11-21 14:39:30.312431558 +0000 UTC m=+6.128225588"
	Nov 21 14:39:32 embed-certs-441390 kubelet[1321]: I1121 14:39:32.769252    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m2nzt" podStartSLOduration=3.7692304979999998 podStartE2EDuration="3.769230498s" podCreationTimestamp="2025-11-21 14:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:39:30.312773673 +0000 UTC m=+6.128567701" watchObservedRunningTime="2025-11-21 14:39:32.769230498 +0000 UTC m=+8.585024527"
	Nov 21 14:39:41 embed-certs-441390 kubelet[1321]: I1121 14:39:41.190578    1321 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:39:41 embed-certs-441390 kubelet[1321]: I1121 14:39:41.259218    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c780507f-61e0-418f-9033-a7e40d5df9ab-config-volume\") pod \"coredns-66bc5c9577-sbjhs\" (UID: \"c780507f-61e0-418f-9033-a7e40d5df9ab\") " pod="kube-system/coredns-66bc5c9577-sbjhs"
	Nov 21 14:39:41 embed-certs-441390 kubelet[1321]: I1121 14:39:41.259272    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2fa17547-fba1-43c4-bb71-c384dd1036aa-tmp\") pod \"storage-provisioner\" (UID: \"2fa17547-fba1-43c4-bb71-c384dd1036aa\") " pod="kube-system/storage-provisioner"
	Nov 21 14:39:41 embed-certs-441390 kubelet[1321]: I1121 14:39:41.259302    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frnz8\" (UniqueName: \"kubernetes.io/projected/2fa17547-fba1-43c4-bb71-c384dd1036aa-kube-api-access-frnz8\") pod \"storage-provisioner\" (UID: \"2fa17547-fba1-43c4-bb71-c384dd1036aa\") " pod="kube-system/storage-provisioner"
	Nov 21 14:39:41 embed-certs-441390 kubelet[1321]: I1121 14:39:41.259422    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22xv9\" (UniqueName: \"kubernetes.io/projected/c780507f-61e0-418f-9033-a7e40d5df9ab-kube-api-access-22xv9\") pod \"coredns-66bc5c9577-sbjhs\" (UID: \"c780507f-61e0-418f-9033-a7e40d5df9ab\") " pod="kube-system/coredns-66bc5c9577-sbjhs"
	Nov 21 14:39:42 embed-certs-441390 kubelet[1321]: I1121 14:39:42.324696    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sbjhs" podStartSLOduration=13.324675238 podStartE2EDuration="13.324675238s" podCreationTimestamp="2025-11-21 14:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:39:42.324520693 +0000 UTC m=+18.140314915" watchObservedRunningTime="2025-11-21 14:39:42.324675238 +0000 UTC m=+18.140469267"
	Nov 21 14:39:42 embed-certs-441390 kubelet[1321]: I1121 14:39:42.332651    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.332633743 podStartE2EDuration="12.332633743s" podCreationTimestamp="2025-11-21 14:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:39:42.332333173 +0000 UTC m=+18.148127204" watchObservedRunningTime="2025-11-21 14:39:42.332633743 +0000 UTC m=+18.148427773"
	Nov 21 14:39:44 embed-certs-441390 kubelet[1321]: I1121 14:39:44.579316    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd9xd\" (UniqueName: \"kubernetes.io/projected/e3e88ed6-52f6-4e30-97ba-30031a549261-kube-api-access-cd9xd\") pod \"busybox\" (UID: \"e3e88ed6-52f6-4e30-97ba-30031a549261\") " pod="default/busybox"
	
	
	==> storage-provisioner [7a270e96bf2f06c34be02d304789d15921b8897828495379f12766287831fa63] <==
	I1121 14:39:41.595401       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:39:41.603347       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:39:41.603407       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:39:41.605445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:41.610029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:39:41.610196       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:39:41.610338       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-441390_72d12fe5-95e5-44c8-883b-a595c002dd57!
	I1121 14:39:41.610329       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f91ead1-6121-4cdc-bc84-4f904c438670", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-441390_72d12fe5-95e5-44c8-883b-a595c002dd57 became leader
	W1121 14:39:41.612122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:41.615812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:39:41.711126       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-441390_72d12fe5-95e5-44c8-883b-a595c002dd57!
	W1121 14:39:43.618555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:43.622980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:45.626191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:45.630074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:47.633096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:47.636724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:49.640041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:49.644233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:51.647792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:51.652754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:53.656480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:53.660430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-441390 -n embed-certs-441390
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-441390 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.11s)
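Aside on the repeated client-go warnings at the tail of the storage-provisioner log above: they come from the provisioner's leader election, which still takes its lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), so every poll of that lock trips the v1.33+ deprecation notice. A minimal way to look at both the legacy lock and the discovery.k8s.io/v1 replacement the warning points to, assuming kubectl access to this context (these commands are illustrative and were not part of the test run):

	kubectl --context embed-certs-441390 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml   # legacy Endpoints lock object
	kubectl --context embed-certs-441390 -n kube-system get endpointslices                               # the v1.33+ replacement resource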

TestStartStop/group/no-preload/serial/Pause (6.31s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-589411 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-589411 --alsologtostderr -v=1: exit status 80 (2.332542746s)

-- stdout --
	* Pausing node no-preload-589411 ... 
	
	

-- /stdout --
** stderr ** 
	I1121 14:39:54.221437  262954 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:39:54.221733  262954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:39:54.221743  262954 out.go:374] Setting ErrFile to fd 2...
	I1121 14:39:54.221749  262954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:39:54.222023  262954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:39:54.222301  262954 out.go:368] Setting JSON to false
	I1121 14:39:54.222348  262954 mustload.go:66] Loading cluster: no-preload-589411
	I1121 14:39:54.222827  262954 config.go:182] Loaded profile config "no-preload-589411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:39:54.223371  262954 cli_runner.go:164] Run: docker container inspect no-preload-589411 --format={{.State.Status}}
	I1121 14:39:54.244041  262954 host.go:66] Checking if "no-preload-589411" exists ...
	I1121 14:39:54.244314  262954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:39:54.314833  262954 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-21 14:39:54.303100665 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:39:54.315555  262954 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-589411 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1121 14:39:54.318061  262954 out.go:179] * Pausing node no-preload-589411 ... 
	I1121 14:39:54.319161  262954 host.go:66] Checking if "no-preload-589411" exists ...
	I1121 14:39:54.319516  262954 ssh_runner.go:195] Run: systemctl --version
	I1121 14:39:54.319595  262954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589411
	I1121 14:39:54.340763  262954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/no-preload-589411/id_rsa Username:docker}
	I1121 14:39:54.439813  262954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:39:54.452147  262954 pause.go:52] kubelet running: true
	I1121 14:39:54.452197  262954 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:39:54.636392  262954 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:39:54.636490  262954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:39:54.710128  262954 cri.go:89] found id: "a6c5bc2e5adccf877569ec8c359d7f1cc50809152c4db1a39a7188aac936ef93"
	I1121 14:39:54.710151  262954 cri.go:89] found id: "ef511d5cab6bd6a19210f4020240515d2d470bc2d5e76d031d3fb82a1b0f13e5"
	I1121 14:39:54.710157  262954 cri.go:89] found id: "f18e8753eb0e613d6889e318b6e9ed46a29e321d61c64f6961a151ca05dfc2d3"
	I1121 14:39:54.710162  262954 cri.go:89] found id: "d03597d623b131413ff01082d5cde4837c45ee0cde1f38ed17fcd16f4ca1e79b"
	I1121 14:39:54.710167  262954 cri.go:89] found id: "34900b3fb77684bd3de1f265b714992b9499c98fd614328a74995aa089a0d576"
	I1121 14:39:54.710171  262954 cri.go:89] found id: "c834c6d0d4bb2461edebbdafedf6c4304e33b6167bb90599f1433f57e20da3ae"
	I1121 14:39:54.710175  262954 cri.go:89] found id: "a6f06b907ba722c73068717b14a501d45b65b1b50b019336f8f72e20f97d4877"
	I1121 14:39:54.710179  262954 cri.go:89] found id: "3168754e97e94add6d3faf44e9b1d45479c54acaeea9af4b8903986d660beead"
	I1121 14:39:54.710183  262954 cri.go:89] found id: "c243e5d767317fb718a9f77aecb00ce5ae279ec984417532538cae290c317a30"
	I1121 14:39:54.710200  262954 cri.go:89] found id: "6aeef65086fdad560a7dbaf32d497a70041b1fcc99047c64fe71950c6ba3d738"
	I1121 14:39:54.710209  262954 cri.go:89] found id: "a5ce31187d2431d019c4a829d693487a4499b5874adf7bc5244c94414888a880"
	I1121 14:39:54.710214  262954 cri.go:89] found id: ""
	I1121 14:39:54.710255  262954 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:39:54.723573  262954 retry.go:31] will retry after 151.977331ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:39:54Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:39:54.875951  262954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:39:54.888371  262954 pause.go:52] kubelet running: false
	I1121 14:39:54.888426  262954 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:39:55.065995  262954 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:39:55.066059  262954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:39:55.140381  262954 cri.go:89] found id: "a6c5bc2e5adccf877569ec8c359d7f1cc50809152c4db1a39a7188aac936ef93"
	I1121 14:39:55.140408  262954 cri.go:89] found id: "ef511d5cab6bd6a19210f4020240515d2d470bc2d5e76d031d3fb82a1b0f13e5"
	I1121 14:39:55.140414  262954 cri.go:89] found id: "f18e8753eb0e613d6889e318b6e9ed46a29e321d61c64f6961a151ca05dfc2d3"
	I1121 14:39:55.140418  262954 cri.go:89] found id: "d03597d623b131413ff01082d5cde4837c45ee0cde1f38ed17fcd16f4ca1e79b"
	I1121 14:39:55.140423  262954 cri.go:89] found id: "34900b3fb77684bd3de1f265b714992b9499c98fd614328a74995aa089a0d576"
	I1121 14:39:55.140427  262954 cri.go:89] found id: "c834c6d0d4bb2461edebbdafedf6c4304e33b6167bb90599f1433f57e20da3ae"
	I1121 14:39:55.140432  262954 cri.go:89] found id: "a6f06b907ba722c73068717b14a501d45b65b1b50b019336f8f72e20f97d4877"
	I1121 14:39:55.140436  262954 cri.go:89] found id: "3168754e97e94add6d3faf44e9b1d45479c54acaeea9af4b8903986d660beead"
	I1121 14:39:55.140440  262954 cri.go:89] found id: "c243e5d767317fb718a9f77aecb00ce5ae279ec984417532538cae290c317a30"
	I1121 14:39:55.140448  262954 cri.go:89] found id: "6aeef65086fdad560a7dbaf32d497a70041b1fcc99047c64fe71950c6ba3d738"
	I1121 14:39:55.140453  262954 cri.go:89] found id: "a5ce31187d2431d019c4a829d693487a4499b5874adf7bc5244c94414888a880"
	I1121 14:39:55.140457  262954 cri.go:89] found id: ""
	I1121 14:39:55.140510  262954 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:39:55.152006  262954 retry.go:31] will retry after 500.380198ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:39:55Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:39:55.652635  262954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:39:55.665763  262954 pause.go:52] kubelet running: false
	I1121 14:39:55.665817  262954 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:39:55.821982  262954 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:39:55.822048  262954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:39:55.893365  262954 cri.go:89] found id: "a6c5bc2e5adccf877569ec8c359d7f1cc50809152c4db1a39a7188aac936ef93"
	I1121 14:39:55.893391  262954 cri.go:89] found id: "ef511d5cab6bd6a19210f4020240515d2d470bc2d5e76d031d3fb82a1b0f13e5"
	I1121 14:39:55.893397  262954 cri.go:89] found id: "f18e8753eb0e613d6889e318b6e9ed46a29e321d61c64f6961a151ca05dfc2d3"
	I1121 14:39:55.893403  262954 cri.go:89] found id: "d03597d623b131413ff01082d5cde4837c45ee0cde1f38ed17fcd16f4ca1e79b"
	I1121 14:39:55.893407  262954 cri.go:89] found id: "34900b3fb77684bd3de1f265b714992b9499c98fd614328a74995aa089a0d576"
	I1121 14:39:55.893412  262954 cri.go:89] found id: "c834c6d0d4bb2461edebbdafedf6c4304e33b6167bb90599f1433f57e20da3ae"
	I1121 14:39:55.893417  262954 cri.go:89] found id: "a6f06b907ba722c73068717b14a501d45b65b1b50b019336f8f72e20f97d4877"
	I1121 14:39:55.893421  262954 cri.go:89] found id: "3168754e97e94add6d3faf44e9b1d45479c54acaeea9af4b8903986d660beead"
	I1121 14:39:55.893425  262954 cri.go:89] found id: "c243e5d767317fb718a9f77aecb00ce5ae279ec984417532538cae290c317a30"
	I1121 14:39:55.893432  262954 cri.go:89] found id: "6aeef65086fdad560a7dbaf32d497a70041b1fcc99047c64fe71950c6ba3d738"
	I1121 14:39:55.893440  262954 cri.go:89] found id: "a5ce31187d2431d019c4a829d693487a4499b5874adf7bc5244c94414888a880"
	I1121 14:39:55.893445  262954 cri.go:89] found id: ""
	I1121 14:39:55.893489  262954 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:39:55.904769  262954 retry.go:31] will retry after 314.047893ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:39:55Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:39:56.219288  262954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:39:56.242015  262954 pause.go:52] kubelet running: false
	I1121 14:39:56.242067  262954 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:39:56.399709  262954 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:39:56.399770  262954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:39:56.465938  262954 cri.go:89] found id: "a6c5bc2e5adccf877569ec8c359d7f1cc50809152c4db1a39a7188aac936ef93"
	I1121 14:39:56.465958  262954 cri.go:89] found id: "ef511d5cab6bd6a19210f4020240515d2d470bc2d5e76d031d3fb82a1b0f13e5"
	I1121 14:39:56.465964  262954 cri.go:89] found id: "f18e8753eb0e613d6889e318b6e9ed46a29e321d61c64f6961a151ca05dfc2d3"
	I1121 14:39:56.465968  262954 cri.go:89] found id: "d03597d623b131413ff01082d5cde4837c45ee0cde1f38ed17fcd16f4ca1e79b"
	I1121 14:39:56.465972  262954 cri.go:89] found id: "34900b3fb77684bd3de1f265b714992b9499c98fd614328a74995aa089a0d576"
	I1121 14:39:56.465977  262954 cri.go:89] found id: "c834c6d0d4bb2461edebbdafedf6c4304e33b6167bb90599f1433f57e20da3ae"
	I1121 14:39:56.465981  262954 cri.go:89] found id: "a6f06b907ba722c73068717b14a501d45b65b1b50b019336f8f72e20f97d4877"
	I1121 14:39:56.465984  262954 cri.go:89] found id: "3168754e97e94add6d3faf44e9b1d45479c54acaeea9af4b8903986d660beead"
	I1121 14:39:56.465988  262954 cri.go:89] found id: "c243e5d767317fb718a9f77aecb00ce5ae279ec984417532538cae290c317a30"
	I1121 14:39:56.465996  262954 cri.go:89] found id: "6aeef65086fdad560a7dbaf32d497a70041b1fcc99047c64fe71950c6ba3d738"
	I1121 14:39:56.466001  262954 cri.go:89] found id: "a5ce31187d2431d019c4a829d693487a4499b5874adf7bc5244c94414888a880"
	I1121 14:39:56.466005  262954 cri.go:89] found id: ""
	I1121 14:39:56.466040  262954 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:39:56.478644  262954 out.go:203] 
	W1121 14:39:56.479781  262954 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:39:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:39:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:39:56.479797  262954 out.go:285] * 
	* 
	W1121 14:39:56.483632  262954 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:39:56.484780  262954 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-589411 --alsologtostderr -v=1 failed: exit status 80
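The root cause is visible in the stderr above: crictl still lists the kube-system containers (the "found id" lines), but each pause attempt then runs sudo runc list -f json, which exits 1 with "open /run/runc: no such file or directory", and once the retries are exhausted the command aborts with GUEST_PAUSE. A quick manual check along the same lines, assuming SSH access to the profile (/run/runc is taken from the error itself; nothing else here is inferred from minikube internals):

	minikube ssh -p no-preload-589411 -- sudo ls -ld /run/runc     # the state directory runc expects; absent per the error
	minikube ssh -p no-preload-589411 -- sudo runc list -f json    # replays the exact call that fails in the log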
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-589411
helpers_test.go:243: (dbg) docker inspect no-preload-589411:

-- stdout --
	[
	    {
	        "Id": "2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45",
	        "Created": "2025-11-21T14:37:40.849517293Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:38:55.650850299Z",
	            "FinishedAt": "2025-11-21T14:38:54.682608242Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45/hosts",
	        "LogPath": "/var/lib/docker/containers/2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45/2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45-json.log",
	        "Name": "/no-preload-589411",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-589411:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-589411",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45",
	                "LowerDir": "/var/lib/docker/overlay2/5b439a772b1bafc04ec7400efb1953394a63935256474aa83fdd49a49549b264-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b439a772b1bafc04ec7400efb1953394a63935256474aa83fdd49a49549b264/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b439a772b1bafc04ec7400efb1953394a63935256474aa83fdd49a49549b264/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b439a772b1bafc04ec7400efb1953394a63935256474aa83fdd49a49549b264/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-589411",
	                "Source": "/var/lib/docker/volumes/no-preload-589411/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-589411",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-589411",
	                "name.minikube.sigs.k8s.io": "no-preload-589411",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "450affadb9146c11a9249b5d32dcb199f98ff92e2191e1f2bd37f92de37d70b0",
	            "SandboxKey": "/var/run/docker/netns/450affadb914",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-589411": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "16216427221de4c7c427a254dcd5d0745c57cde4857ab5c433751b20e1dda883",
	                    "EndpointID": "1ab410781c6734ac1b1e596db10964d19ad351d7d723d7a658d45a0d93a8c334",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "56:13:95:9e:0f:69",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-589411",
	                        "2ba122d6d7a1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
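The inspect output above ends with the host-side port map minikube dials into the container: 22/tcp (SSH) is published on 127.0.0.1:33069 and 8443/tcp (the Kubernetes API server) on 127.0.0.1:33072. As a minimal sketch, the same Go template the harness runs later in this log can pull a single mapping back out by hand; the container name is taken from the inspect output above:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-589411
	# prints 33069 for the container inspected above
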
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589411 -n no-preload-589411
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589411 -n no-preload-589411: exit status 2 (373.353535ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-589411 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-589411 logs -n 25: (1.226626549s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-116734                                                                                                                                                                                                                        │ cert-options-116734       │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:36 UTC │ 21 Nov 25 14:37 UTC │
	│ start   │ -p pause-738756 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-738756              │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ pause   │ -p pause-738756 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-738756              │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	│ delete  │ -p pause-738756                                                                                                                                                                                                                               │ pause-738756              │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ start   │ -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-794941 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	│ stop    │ -p old-k8s-version-794941 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-794941 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ start   │ -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-589411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │                     │
	│ stop    │ -p no-preload-589411 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ image   │ old-k8s-version-794941 image list --format=json                                                                                                                                                                                               │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ pause   │ -p old-k8s-version-794941 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-589411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ start   │ -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:39 UTC │
	│ delete  │ -p old-k8s-version-794941                                                                                                                                                                                                                     │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:39 UTC │
	│ delete  │ -p old-k8s-version-794941                                                                                                                                                                                                                     │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ start   │ -p embed-certs-441390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-441390        │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ start   │ -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-214044 │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ start   │ -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-214044 │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-441390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-441390        │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ image   │ no-preload-589411 image list --format=json                                                                                                                                                                                                    │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ pause   │ -p no-preload-589411 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ stop    │ -p embed-certs-441390 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-441390        │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
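In the audit table above, the pause entry for no-preload-589411 records a START TIME but no END TIME; that unfinished step is what this post-mortem covers. Both commands below are taken verbatim from this report and replay the failing sequence against the same profile:

	out/minikube-linux-amd64 pause -p no-preload-589411 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589411 -n no-preload-589411
	# per the post-mortem above, status still printed "Running" (exit status 2) after the pause attempt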
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:39:52
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:39:52.496919  261994 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:39:52.497049  261994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:39:52.497060  261994 out.go:374] Setting ErrFile to fd 2...
	I1121 14:39:52.497068  261994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:39:52.497253  261994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:39:52.497811  261994 out.go:368] Setting JSON to false
	I1121 14:39:52.499018  261994 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4941,"bootTime":1763731051,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:39:52.499071  261994 start.go:143] virtualization: kvm guest
	I1121 14:39:52.500413  261994 out.go:179] * [kubernetes-upgrade-214044] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:39:52.501850  261994 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:39:52.501926  261994 notify.go:221] Checking for updates...
	I1121 14:39:52.504177  261994 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:39:52.505422  261994 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:39:52.506547  261994 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:39:52.507529  261994 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:39:52.508630  261994 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:39:52.510015  261994 config.go:182] Loaded profile config "kubernetes-upgrade-214044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:39:52.510423  261994 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:39:52.537118  261994 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:39:52.537204  261994 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:39:52.596709  261994 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-21 14:39:52.587095337 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:39:52.596802  261994 docker.go:319] overlay module found
	I1121 14:39:52.598329  261994 out.go:179] * Using the docker driver based on existing profile
	I1121 14:39:52.599329  261994 start.go:309] selected driver: docker
	I1121 14:39:52.599349  261994 start.go:930] validating driver "docker" against &{Name:kubernetes-upgrade-214044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-214044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:39:52.599455  261994 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:39:52.600254  261994 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:39:52.665926  261994 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-21 14:39:52.655707888 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:39:52.666306  261994 cni.go:84] Creating CNI manager for ""
	I1121 14:39:52.666380  261994 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:39:52.666485  261994 start.go:353] cluster config:
	{Name:kubernetes-upgrade-214044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-214044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:39:52.668839  261994 out.go:179] * Starting "kubernetes-upgrade-214044" primary control-plane node in "kubernetes-upgrade-214044" cluster
	I1121 14:39:52.670018  261994 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:39:52.671151  261994 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:39:52.672508  261994 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:39:52.672544  261994 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 14:39:52.672573  261994 cache.go:65] Caching tarball of preloaded images
	I1121 14:39:52.672624  261994 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:39:52.672694  261994 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 14:39:52.672711  261994 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:39:52.672836  261994 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/config.json ...
	I1121 14:39:52.695148  261994 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:39:52.695169  261994 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:39:52.695189  261994 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:39:52.695224  261994 start.go:360] acquireMachinesLock for kubernetes-upgrade-214044: {Name:mk6cde14689d14fd2f260e673006d87e7504059f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:39:52.695285  261994 start.go:364] duration metric: took 38.177µs to acquireMachinesLock for "kubernetes-upgrade-214044"
	I1121 14:39:52.695305  261994 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:39:52.695313  261994 fix.go:54] fixHost starting: 
	I1121 14:39:52.695625  261994 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-214044 --format={{.State.Status}}
	I1121 14:39:52.712428  261994 fix.go:112] recreateIfNeeded on kubernetes-upgrade-214044: state=Running err=<nil>
	W1121 14:39:52.712452  261994 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 14:39:52.713784  261994 out.go:252] * Updating the running docker "kubernetes-upgrade-214044" container ...
	I1121 14:39:52.713811  261994 machine.go:94] provisionDockerMachine start ...
	I1121 14:39:52.713879  261994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-214044
	I1121 14:39:52.730906  261994 main.go:143] libmachine: Using SSH client type: native
	I1121 14:39:52.731180  261994 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1121 14:39:52.731197  261994 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:39:52.861328  261994 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-214044
	
	I1121 14:39:52.861353  261994 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-214044"
	I1121 14:39:52.861411  261994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-214044
	I1121 14:39:52.881437  261994 main.go:143] libmachine: Using SSH client type: native
	I1121 14:39:52.881752  261994 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1121 14:39:52.881775  261994 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-214044 && echo "kubernetes-upgrade-214044" | sudo tee /etc/hostname
	I1121 14:39:53.027414  261994 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-214044
	
	I1121 14:39:53.027494  261994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-214044
	I1121 14:39:53.047103  261994 main.go:143] libmachine: Using SSH client type: native
	I1121 14:39:53.047382  261994 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1121 14:39:53.047408  261994 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-214044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-214044/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-214044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:39:53.179847  261994 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:39:53.179880  261994 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11045/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11045/.minikube}
	I1121 14:39:53.179925  261994 ubuntu.go:190] setting up certificates
	I1121 14:39:53.179942  261994 provision.go:84] configureAuth start
	I1121 14:39:53.180008  261994 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-214044
	I1121 14:39:53.200269  261994 provision.go:143] copyHostCerts
	I1121 14:39:53.200340  261994 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem, removing ...
	I1121 14:39:53.200356  261994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem
	I1121 14:39:53.200433  261994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem (1679 bytes)
	I1121 14:39:53.200549  261994 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem, removing ...
	I1121 14:39:53.200589  261994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem
	I1121 14:39:53.200649  261994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem (1078 bytes)
	I1121 14:39:53.200738  261994 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem, removing ...
	I1121 14:39:53.200748  261994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem
	I1121 14:39:53.200791  261994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem (1123 bytes)
	I1121 14:39:53.200862  261994 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-214044 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-214044 localhost minikube]
	I1121 14:39:53.427478  261994 provision.go:177] copyRemoteCerts
	I1121 14:39:53.427541  261994 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:39:53.427602  261994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-214044
	I1121 14:39:53.446505  261994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/kubernetes-upgrade-214044/id_rsa Username:docker}
	I1121 14:39:53.542096  261994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:39:53.559337  261994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1121 14:39:53.577486  261994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:39:53.594621  261994 provision.go:87] duration metric: took 414.662936ms to configureAuth
	I1121 14:39:53.594642  261994 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:39:53.594831  261994 config.go:182] Loaded profile config "kubernetes-upgrade-214044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:39:53.594953  261994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-214044
	I1121 14:39:53.614027  261994 main.go:143] libmachine: Using SSH client type: native
	I1121 14:39:53.614227  261994 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1121 14:39:53.614250  261994 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:39:54.183117  261994 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:39:54.183143  261994 machine.go:97] duration metric: took 1.469323517s to provisionDockerMachine
	I1121 14:39:54.183154  261994 start.go:293] postStartSetup for "kubernetes-upgrade-214044" (driver="docker")
	I1121 14:39:54.183167  261994 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:39:54.183236  261994 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:39:54.183286  261994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-214044
	I1121 14:39:54.205463  261994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/kubernetes-upgrade-214044/id_rsa Username:docker}
	I1121 14:39:54.307420  261994 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:39:54.311742  261994 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:39:54.311776  261994 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:39:54.311786  261994 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/addons for local assets ...
	I1121 14:39:54.311838  261994 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/files for local assets ...
	I1121 14:39:54.311941  261994 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem -> 145422.pem in /etc/ssl/certs
	I1121 14:39:54.312071  261994 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:39:54.320958  261994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:39:54.342253  261994 start.go:296] duration metric: took 159.04449ms for postStartSetup
	I1121 14:39:54.342341  261994 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:39:54.342385  261994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-214044
	I1121 14:39:54.366487  261994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/kubernetes-upgrade-214044/id_rsa Username:docker}
	I1121 14:39:54.460244  261994 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:39:54.464604  261994 fix.go:56] duration metric: took 1.769284918s for fixHost
	I1121 14:39:54.464633  261994 start.go:83] releasing machines lock for "kubernetes-upgrade-214044", held for 1.769335593s
	I1121 14:39:54.464696  261994 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-214044
	I1121 14:39:54.482629  261994 ssh_runner.go:195] Run: cat /version.json
	I1121 14:39:54.482682  261994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-214044
	I1121 14:39:54.482712  261994 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:39:54.482778  261994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-214044
	I1121 14:39:54.507911  261994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/kubernetes-upgrade-214044/id_rsa Username:docker}
	I1121 14:39:54.510865  261994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/kubernetes-upgrade-214044/id_rsa Username:docker}
	I1121 14:39:54.604623  261994 ssh_runner.go:195] Run: systemctl --version
	I1121 14:39:54.693824  261994 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:39:54.737696  261994 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:39:54.742680  261994 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:39:54.742726  261994 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:39:54.751140  261994 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 14:39:54.751160  261994 start.go:496] detecting cgroup driver to use...
	I1121 14:39:54.751186  261994 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:39:54.751231  261994 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:39:54.766204  261994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:39:54.778908  261994 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:39:54.778961  261994 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:39:54.795182  261994 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:39:54.808250  261994 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:39:54.912798  261994 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:39:55.038695  261994 docker.go:234] disabling docker service ...
	I1121 14:39:55.038764  261994 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:39:55.054802  261994 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:39:55.068095  261994 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:39:55.172771  261994 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:39:55.268330  261994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:39:55.280235  261994 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:39:55.293875  261994 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:39:55.293928  261994 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:39:55.302188  261994 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 14:39:55.302244  261994 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:39:55.310349  261994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:39:55.318207  261994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:39:55.326406  261994 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:39:55.333961  261994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:39:55.342280  261994 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:39:55.350082  261994 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:39:55.357914  261994 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:39:55.364883  261994 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:39:55.371447  261994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:39:55.463372  261994 ssh_runner.go:195] Run: sudo systemctl restart crio
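Taken together, the sed pipeline above pins the pause image, switches CRI-O to the systemd cgroup driver with conmon in the "pod" cgroup, and opens unprivileged low ports via default_sysctls before restarting the runtime. A quick hand check that the edits landed (a sketch; the expected values are exactly what the commands above write):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected after the edits:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
	sudo systemctl is-active crio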
	I1121 14:39:55.633886  261994 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:39:55.633954  261994 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:39:55.638931  261994 start.go:564] Will wait 60s for crictl version
	I1121 14:39:55.639020  261994 ssh_runner.go:195] Run: which crictl
	I1121 14:39:55.642989  261994 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:39:55.669444  261994 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:39:55.669521  261994 ssh_runner.go:195] Run: crio --version
	I1121 14:39:55.702592  261994 ssh_runner.go:195] Run: crio --version
	I1121 14:39:55.738902  261994 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:39:55.740094  261994 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-214044 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:39:55.757657  261994 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1121 14:39:55.761885  261994 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-214044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-214044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:39:55.762030  261994 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:39:55.762088  261994 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:39:55.793449  261994 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:39:55.793472  261994 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:39:55.793520  261994 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:39:55.817379  261994 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:39:55.817396  261994 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:39:55.817403  261994 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1121 14:39:55.817499  261994 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-214044 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-214044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
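The [Service] override above is the kubelet drop-in minikube generates; a few lines below it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch of confirming what systemd actually loaded on the node (systemctl cat is stock systemd; the drop-in path comes from the scp step below):

	sudo systemctl cat kubelet
	# shows kubelet.service plus the 10-kubeadm.conf drop-in with the ExecStart override above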
	I1121 14:39:55.817551  261994 ssh_runner.go:195] Run: crio config
	I1121 14:39:55.874027  261994 cni.go:84] Creating CNI manager for ""
	I1121 14:39:55.874050  261994 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:39:55.874064  261994 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:39:55.874085  261994 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-214044 NodeName:kubernetes-upgrade-214044 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:39:55.874206  261994 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-214044"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:39:55.874260  261994 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:39:55.882527  261994 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:39:55.882613  261994 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:39:55.891339  261994 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1121 14:39:55.903995  261994 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:39:55.915698  261994 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
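The 2221-byte kubeadm.yaml.new written above is the four-document config dumped earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a sketch, it can be sanity-checked by hand with the binaries the log just found, assuming this kubeadm build ships the "config validate" subcommand (present in recent Kubernetes releases):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new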
	I1121 14:39:55.927514  261994 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:39:55.931055  261994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:39:56.032397  261994 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:39:56.046939  261994 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044 for IP: 192.168.76.2
	I1121 14:39:56.046962  261994 certs.go:195] generating shared ca certs ...
	I1121 14:39:56.046981  261994 certs.go:227] acquiring lock for ca certs: {Name:mkde3a7d6f17b238f06eab3a140993599f1b4367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:39:56.047141  261994 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key
	I1121 14:39:56.047198  261994 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key
	I1121 14:39:56.047213  261994 certs.go:257] generating profile certs ...
	I1121 14:39:56.047351  261994 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/client.key
	I1121 14:39:56.047448  261994 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/apiserver.key.8e1a5394
	I1121 14:39:56.047503  261994 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/proxy-client.key
	I1121 14:39:56.047663  261994 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem (1338 bytes)
	W1121 14:39:56.047704  261994 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542_empty.pem, impossibly tiny 0 bytes
	I1121 14:39:56.047721  261994 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:39:56.047753  261994 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:39:56.047839  261994 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:39:56.047876  261994 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem (1679 bytes)
	I1121 14:39:56.047938  261994 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:39:56.048693  261994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:39:56.068527  261994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:39:56.088938  261994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:39:56.106191  261994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 14:39:56.123803  261994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1121 14:39:56.142800  261994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 14:39:56.161227  261994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:39:56.178304  261994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:39:56.196459  261994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /usr/share/ca-certificates/145422.pem (1708 bytes)
	I1121 14:39:56.214249  261994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:39:56.232897  261994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem --> /usr/share/ca-certificates/14542.pem (1338 bytes)
	I1121 14:39:56.256884  261994 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:39:56.270194  261994 ssh_runner.go:195] Run: openssl version
	I1121 14:39:56.277377  261994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145422.pem && ln -fs /usr/share/ca-certificates/145422.pem /etc/ssl/certs/145422.pem"
	I1121 14:39:56.291750  261994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145422.pem
	I1121 14:39:56.295762  261994 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145422.pem
	I1121 14:39:56.295817  261994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145422.pem
	I1121 14:39:56.332130  261994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145422.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:39:56.341137  261994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:39:56.351841  261994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:39:56.355796  261994 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:39:56.355861  261994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:39:56.391505  261994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:39:56.399694  261994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14542.pem && ln -fs /usr/share/ca-certificates/14542.pem /etc/ssl/certs/14542.pem"
	I1121 14:39:56.408065  261994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14542.pem
	I1121 14:39:56.411645  261994 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14542.pem
	I1121 14:39:56.411690  261994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	I1121 14:39:56.452906  261994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14542.pem /etc/ssl/certs/51391683.0"
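	The symlink names above are not arbitrary: OpenSSL locates CAs under /etc/ssl/certs by subject-name hash, so each PEM gets a <hash>.0 link. A minimal sketch reproducing the hash seen in this run:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # <hash>.0 lookup name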
	I1121 14:39:56.461510  261994 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:39:56.465543  261994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:39:56.507395  261994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:39:56.551298  261994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:39:56.589384  261994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:39:56.629785  261994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:39:56.669905  261994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
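	Each openssl run above uses -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); minikube uses the exit status to decide whether certs need regeneration. A minimal sketch of the same check with an explicit result:
	    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	      && echo "still valid in 24h" || echo "expires within 24h"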
	I1121 14:39:56.709262  261994 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-214044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-214044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:39:56.709366  261994 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:39:56.709429  261994 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:39:56.740449  261994 cri.go:89] found id: "1cf111e0d5bd45fa66a08420616fa7af0373333311b61302801d50103e62c80e"
	I1121 14:39:56.740477  261994 cri.go:89] found id: "e3d95e8f6a843d38ba5ded0916be7e08649a3f61521b352a847d4d6e6ff78f4f"
	I1121 14:39:56.740483  261994 cri.go:89] found id: "02ec1491e2f4bbbd7484b47a46ce9e246f4960d34a894ea80b4ccea563c50a53"
	I1121 14:39:56.740488  261994 cri.go:89] found id: "bacbffb589e0eb590e253aab33fd2c2f2e5fed6114a4a4d0bddc67ce01eefd8d"
	I1121 14:39:56.740492  261994 cri.go:89] found id: ""
	I1121 14:39:56.740534  261994 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 14:39:56.753253  261994 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:39:56Z" level=error msg="open /run/runc: no such file or directory"
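	The runc listing fails because /run/runc (runc's default state root) does not exist on this node; the failure is treated as "no paused containers" and startup continues. The CRI-level query that did succeed above is the crictl form:
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system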
	I1121 14:39:56.753343  261994 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:39:56.762347  261994 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:39:56.762370  261994 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:39:56.762424  261994 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:39:56.771224  261994 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:39:56.772696  261994 kubeconfig.go:125] found "kubernetes-upgrade-214044" server: "https://192.168.76.2:8443"
	I1121 14:39:56.774404  261994 kapi.go:59] client config for kubernetes-upgrade-214044: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/client.crt", KeyFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/client.key", CAFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1121 14:39:56.774862  261994 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1121 14:39:56.774882  261994 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1121 14:39:56.774887  261994 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1121 14:39:56.774891  261994 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1121 14:39:56.774897  261994 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1121 14:39:56.775204  261994 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:39:56.783871  261994 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1121 14:39:56.783901  261994 kubeadm.go:602] duration metric: took 21.524283ms to restartPrimaryControlPlane
	I1121 14:39:56.783912  261994 kubeadm.go:403] duration metric: took 74.660861ms to StartCluster
	I1121 14:39:56.783928  261994 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:39:56.783994  261994 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:39:56.786640  261994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:39:56.786934  261994 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:39:56.787274  261994 config.go:182] Loaded profile config "kubernetes-upgrade-214044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:39:56.787317  261994 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:39:56.787398  261994 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-214044"
	I1121 14:39:56.787416  261994 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-214044"
	W1121 14:39:56.787428  261994 addons.go:248] addon storage-provisioner should already be in state true
	I1121 14:39:56.787455  261994 host.go:66] Checking if "kubernetes-upgrade-214044" exists ...
	I1121 14:39:56.787740  261994 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-214044"
	I1121 14:39:56.787779  261994 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-214044"
	I1121 14:39:56.787957  261994 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-214044 --format={{.State.Status}}
	I1121 14:39:56.788151  261994 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-214044 --format={{.State.Status}}
	I1121 14:39:56.788935  261994 out.go:179] * Verifying Kubernetes components...
	I1121 14:39:56.790276  261994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:39:56.816552  261994 kapi.go:59] client config for kubernetes-upgrade-214044: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/client.crt", KeyFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/client.key", CAFile:"/home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1121 14:39:56.816896  261994 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.159226644Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f4c1a60e-d116-4c51-a0fb-fecc24735b13 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.160226777Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d51a8c32-acbf-4ddd-84c0-43173d3917da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.160360589Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.165005342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.165135707Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c752e60adc3af0ad327883bd6c48e1cb1055ca6c4b9c8612155791da9ec211e4/merged/etc/passwd: no such file or directory"
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.165157465Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c752e60adc3af0ad327883bd6c48e1cb1055ca6c4b9c8612155791da9ec211e4/merged/etc/group: no such file or directory"
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.165356075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.201665977Z" level=info msg="Created container a6c5bc2e5adccf877569ec8c359d7f1cc50809152c4db1a39a7188aac936ef93: kube-system/storage-provisioner/storage-provisioner" id=d51a8c32-acbf-4ddd-84c0-43173d3917da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.202196192Z" level=info msg="Starting container: a6c5bc2e5adccf877569ec8c359d7f1cc50809152c4db1a39a7188aac936ef93" id=b52968c2-bf58-4b82-8ebc-1518b0ef6e9c name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.203828841Z" level=info msg="Started container" PID=1705 containerID=a6c5bc2e5adccf877569ec8c359d7f1cc50809152c4db1a39a7188aac936ef93 description=kube-system/storage-provisioner/storage-provisioner id=b52968c2-bf58-4b82-8ebc-1518b0ef6e9c name=/runtime.v1.RuntimeService/StartContainer sandboxID=74aa257caa83ddd55787a593a0270be3a259066e68c835f47234eb50660fa0c7
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.776934744Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.78099093Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.781011745Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.781028523Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.784483417Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.784504006Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.784519973Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.788134635Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.788168482Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.788183199Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.79133425Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.791351905Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.791368932Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.794585117Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.794606858Z" level=info msg="Updated default CNI network name to kindnet"
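	The CREATE/WRITE/RENAME sequence above is kindnet writing its CNI config atomically: it fills 10-kindnet.conflist.temp, then renames it into place, and CRI-O's file watcher re-resolves the default network on each event. A minimal sketch for confirming the resulting default network (jq assumed available on the node):
	    jq -r .name /etc/cni/net.d/10-kindnet.conflist   # -> kindnet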
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a6c5bc2e5adcc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   74aa257caa83d       storage-provisioner                          kube-system
	6aeef65086fda       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   97041785d9272       dashboard-metrics-scraper-6ffb444bf9-q5hp6   kubernetes-dashboard
	a5ce31187d243       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   fe22143f1216f       kubernetes-dashboard-855c9754f9-hc2j2        kubernetes-dashboard
	ef511d5cab6bd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   1dbe83d30878a       coredns-66bc5c9577-db94z                     kube-system
	b7517f4fdccc7       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   e6cc1b466f00d       busybox                                      default
	f18e8753eb0e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   74aa257caa83d       storage-provisioner                          kube-system
	d03597d623b13       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   2f237ed59bff7       kube-proxy-qhp5d                             kube-system
	34900b3fb7768       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   0e6e5fd93068b       kindnet-h7k2r                                kube-system
	c834c6d0d4bb2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   6e275ab09475a       kube-scheduler-no-preload-589411             kube-system
	a6f06b907ba72       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   c3a4b02bf6f51       kube-controller-manager-no-preload-589411    kube-system
	3168754e97e94       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   9422674c00b7e       kube-apiserver-no-preload-589411             kube-system
	c243e5d767317       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   c2089a495f230       etcd-no-preload-589411                       kube-system
	
	
	==> coredns [ef511d5cab6bd6a19210f4020240515d2d470bc2d5e76d031d3fb82a1b0f13e5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:48817 - 21556 "HINFO IN 1772689697979126168.352344727929161197. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.115018342s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
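	The dial timeouts above target 10.96.0.1:443 — the first address of the serviceSubnet 10.96.0.0/12 declared in the config earlier, and therefore the ClusterIP of the default kubernetes Service; they stop once kube-proxy and the CNI have reprogrammed the node after restart. A minimal sketch for confirming that VIP:
	    kubectl get svc kubernetes -n default   # CLUSTER-IP 10.96.0.1, PORT(S) 443/TCP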
	
	
	==> describe nodes <==
	Name:               no-preload-589411
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-589411
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=no-preload-589411
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_38_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:38:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-589411
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:39:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:39:35 +0000   Fri, 21 Nov 2025 14:38:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:39:35 +0000   Fri, 21 Nov 2025 14:38:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:39:35 +0000   Fri, 21 Nov 2025 14:38:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:39:35 +0000   Fri, 21 Nov 2025 14:38:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-589411
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                8c0c3626-fd96-4939-aead-166c796faa08
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-db94z                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-no-preload-589411                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-h7k2r                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-no-preload-589411              250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-no-preload-589411     200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-qhp5d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-no-preload-589411              100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-q5hp6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hc2j2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node no-preload-589411 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node no-preload-589411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 115s)  kubelet          Node no-preload-589411 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    111s                 kubelet          Node no-preload-589411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  111s                 kubelet          Node no-preload-589411 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     111s                 kubelet          Node no-preload-589411 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node no-preload-589411 event: Registered Node no-preload-589411 in Controller
	  Normal  NodeReady                92s                  kubelet          Node no-preload-589411 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node no-preload-589411 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node no-preload-589411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node no-preload-589411 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                  node-controller  Node no-preload-589411 event: Registered Node no-preload-589411 in Controller
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [c243e5d767317fb718a9f77aecb00ce5ae279ec984417532538cae290c317a30] <==
	{"level":"info","ts":"2025-11-21T14:39:05.924121Z","caller":"traceutil/trace.go:172","msg":"trace[1351447124] range","detail":"{range_begin:/registry/clusterroles/system:heapster; range_end:; response_count:1; response_revision:503; }","duration":"140.961208ms","start":"2025-11-21T14:39:05.783152Z","end":"2025-11-21T14:39:05.924113Z","steps":["trace[1351447124] 'agreement among raft nodes before linearized reading'  (duration: 140.82113ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T14:39:05.924122Z","caller":"traceutil/trace.go:172","msg":"trace[944089475] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"143.276227ms","start":"2025-11-21T14:39:05.780829Z","end":"2025-11-21T14:39:05.924106Z","steps":["trace[944089475] 'process raft request'  (duration: 143.148476ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-21T14:39:06.137028Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.694739ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:attachdetach-controller\" limit:1 ","response":"range_response_count:1 size:767"}
	{"level":"info","ts":"2025-11-21T14:39:06.137099Z","caller":"traceutil/trace.go:172","msg":"trace[1052132736] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:attachdetach-controller; range_end:; response_count:1; response_revision:504; }","duration":"113.776607ms","start":"2025-11-21T14:39:06.023308Z","end":"2025-11-21T14:39:06.137085Z","steps":["trace[1052132736] 'range keys from in-memory index tree'  (duration: 113.522693ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T14:39:06.295901Z","caller":"traceutil/trace.go:172","msg":"trace[912518461] linearizableReadLoop","detail":"{readStateIndex:532; appliedIndex:532; }","duration":"131.742906ms","start":"2025-11-21T14:39:06.164134Z","end":"2025-11-21T14:39:06.295877Z","steps":["trace[912518461] 'read index received'  (duration: 131.735722ms)","trace[912518461] 'applied index is now lower than readState.Index'  (duration: 6.465µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-21T14:39:06.296038Z","caller":"traceutil/trace.go:172","msg":"trace[1716672919] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"132.530401ms","start":"2025-11-21T14:39:06.163497Z","end":"2025-11-21T14:39:06.296027Z","steps":["trace[1716672919] 'process raft request'  (duration: 132.412671ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-21T14:39:06.296063Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.903764ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:resource-claim-controller\" limit:1 ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2025-11-21T14:39:06.296103Z","caller":"traceutil/trace.go:172","msg":"trace[1807869704] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:resource-claim-controller; range_end:; response_count:1; response_revision:505; }","duration":"131.966297ms","start":"2025-11-21T14:39:06.164127Z","end":"2025-11-21T14:39:06.296093Z","steps":["trace[1807869704] 'agreement among raft nodes before linearized reading'  (duration: 131.808312ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T14:39:06.556731Z","caller":"traceutil/trace.go:172","msg":"trace[887045539] linearizableReadLoop","detail":"{readStateIndex:533; appliedIndex:533; }","duration":"252.052984ms","start":"2025-11-21T14:39:06.304640Z","end":"2025-11-21T14:39:06.556693Z","steps":["trace[887045539] 'read index received'  (duration: 252.045727ms)","trace[887045539] 'applied index is now lower than readState.Index'  (duration: 6.419µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T14:39:06.687640Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"382.972806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:persistent-volume-binder\" limit:1 ","response":"range_response_count:1 size:771"}
	{"level":"info","ts":"2025-11-21T14:39:06.687705Z","caller":"traceutil/trace.go:172","msg":"trace[1877397099] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:persistent-volume-binder; range_end:; response_count:1; response_revision:506; }","duration":"383.053395ms","start":"2025-11-21T14:39:06.304634Z","end":"2025-11-21T14:39:06.687688Z","steps":["trace[1877397099] 'agreement among raft nodes before linearized reading'  (duration: 252.149749ms)","trace[1877397099] 'range keys from in-memory index tree'  (duration: 130.736931ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T14:39:06.687746Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-21T14:39:06.304627Z","time spent":"383.106663ms","remote":"127.0.0.1:53036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":1,"response size":794,"request content":"key:\"/registry/clusterrolebindings/system:controller:persistent-volume-binder\" limit:1 "}
	{"level":"warn","ts":"2025-11-21T14:39:06.688111Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.923941ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597211987367770 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-qhp5d\" mod_revision:494 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-qhp5d\" value_size:4977 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-qhp5d\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-21T14:39:06.688179Z","caller":"traceutil/trace.go:172","msg":"trace[517875784] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"384.216912ms","start":"2025-11-21T14:39:06.303950Z","end":"2025-11-21T14:39:06.688167Z","steps":["trace[517875784] 'process raft request'  (duration: 252.78005ms)","trace[517875784] 'compare'  (duration: 130.810023ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T14:39:06.688250Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-21T14:39:06.303935Z","time spent":"384.270575ms","remote":"127.0.0.1:52684","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5028,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-qhp5d\" mod_revision:494 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-qhp5d\" value_size:4977 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-qhp5d\" > >"}
	{"level":"info","ts":"2025-11-21T14:39:06.837628Z","caller":"traceutil/trace.go:172","msg":"trace[1831644993] linearizableReadLoop","detail":"{readStateIndex:534; appliedIndex:534; }","duration":"139.564832ms","start":"2025-11-21T14:39:06.698044Z","end":"2025-11-21T14:39:06.837609Z","steps":["trace[1831644993] 'read index received'  (duration: 139.558727ms)","trace[1831644993] 'applied index is now lower than readState.Index'  (duration: 5.362µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T14:39:06.985159Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"287.085092ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:service-controller\" limit:1 ","response":"range_response_count:1 size:747"}
	{"level":"info","ts":"2025-11-21T14:39:06.985222Z","caller":"traceutil/trace.go:172","msg":"trace[384454277] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:service-controller; range_end:; response_count:1; response_revision:507; }","duration":"287.16716ms","start":"2025-11-21T14:39:06.698042Z","end":"2025-11-21T14:39:06.985209Z","steps":["trace[384454277] 'agreement among raft nodes before linearized reading'  (duration: 139.659856ms)","trace[384454277] 'range keys from in-memory index tree'  (duration: 147.320383ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T14:39:06.985254Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.482593ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597211987367779 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-db94z\" mod_revision:497 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-db94z\" value_size:5859 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-db94z\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-21T14:39:06.985333Z","caller":"traceutil/trace.go:172","msg":"trace[531759135] linearizableReadLoop","detail":"{readStateIndex:535; appliedIndex:534; }","duration":"147.645761ms","start":"2025-11-21T14:39:06.837677Z","end":"2025-11-21T14:39:06.985323Z","steps":["trace[531759135] 'read index received'  (duration: 40.116µs)","trace[531759135] 'applied index is now lower than readState.Index'  (duration: 147.604818ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-21T14:39:06.985359Z","caller":"traceutil/trace.go:172","msg":"trace[682881008] transaction","detail":"{read_only:false; response_revision:508; number_of_response:1; }","duration":"287.891529ms","start":"2025-11-21T14:39:06.697443Z","end":"2025-11-21T14:39:06.985335Z","steps":["trace[682881008] 'process raft request'  (duration: 140.26124ms)","trace[682881008] 'compare'  (duration: 147.362549ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T14:39:06.985398Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"252.716272ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T14:39:06.985419Z","caller":"traceutil/trace.go:172","msg":"trace[2142754160] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:508; }","duration":"252.737936ms","start":"2025-11-21T14:39:06.732674Z","end":"2025-11-21T14:39:06.985412Z","steps":["trace[2142754160] 'agreement among raft nodes before linearized reading'  (duration: 252.683781ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-21T14:39:07.315817Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.348736ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.85.2\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T14:39:07.315870Z","caller":"traceutil/trace.go:172","msg":"trace[694863912] range","detail":"{range_begin:/registry/masterleases/192.168.85.2; range_end:; response_count:0; response_revision:509; }","duration":"182.416868ms","start":"2025-11-21T14:39:07.133441Z","end":"2025-11-21T14:39:07.315858Z","steps":["trace[694863912] 'range keys from in-memory index tree'  (duration: 182.286039ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:39:57 up  1:22,  0 user,  load average: 2.52, 2.43, 1.68
	Linux no-preload-589411 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [34900b3fb77684bd3de1f265b714992b9499c98fd614328a74995aa089a0d576] <==
	I1121 14:39:05.572662       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:39:05.572868       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:39:05.573016       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:39:05.573037       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:39:05.573065       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:39:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:39:05.772950       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:39:05.773359       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:39:05.773409       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:39:05.773547       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 14:39:35.774358       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 14:39:35.774361       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 14:39:35.774359       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 14:39:35.774428       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1121 14:39:37.373698       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:39:37.373724       1 metrics.go:72] Registering metrics
	I1121 14:39:37.373797       1 controller.go:711] "Syncing nftables rules"
	I1121 14:39:45.776647       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:39:45.776699       1 main.go:301] handling current node
	I1121 14:39:55.781650       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:39:55.781753       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3168754e97e94add6d3faf44e9b1d45479c54acaeea9af4b8903986d660beead] <==
	I1121 14:39:04.726676       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 14:39:04.726785       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1121 14:39:04.726843       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:39:04.727354       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1121 14:39:04.727592       1 aggregator.go:171] initial CRD sync complete...
	I1121 14:39:04.727631       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 14:39:04.727654       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:39:04.727677       1 cache.go:39] Caches are synced for autoregister controller
	E1121 14:39:04.731465       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 14:39:04.731611       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 14:39:04.762646       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1121 14:39:04.769874       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1121 14:39:04.769906       1 policy_source.go:240] refreshing policies
	I1121 14:39:04.780688       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:39:05.015172       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:39:05.044293       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:39:05.064055       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:39:05.071323       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:39:05.078488       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:39:05.125299       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.122.74"}
	I1121 14:39:05.138946       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.156.249"}
	I1121 14:39:05.629073       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:39:09.103214       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:39:09.455335       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:39:09.552498       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a6f06b907ba722c73068717b14a501d45b65b1b50b019336f8f72e20f97d4877] <==
	I1121 14:39:09.001864       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:39:09.001697       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 14:39:09.002026       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 14:39:09.003257       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 14:39:09.005467       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 14:39:09.014716       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:39:09.051058       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:39:09.051112       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:39:09.051143       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:39:09.051189       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:39:09.051204       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:39:09.051446       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:39:09.051592       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 14:39:09.051680       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:39:09.057916       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:39:09.058998       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 14:39:09.059059       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 14:39:09.059094       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 14:39:09.059103       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 14:39:09.059110       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 14:39:09.065352       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:39:09.071705       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:39:09.071723       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:39:09.071731       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:39:09.073809       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d03597d623b131413ff01082d5cde4837c45ee0cde1f38ed17fcd16f4ca1e79b] <==
	I1121 14:39:05.423805       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:39:05.486132       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:39:05.587027       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:39:05.587056       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:39:05.587116       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:39:05.644601       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:39:05.644668       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:39:05.650537       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:39:05.650953       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:39:05.650988       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:39:05.652616       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:39:05.652652       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:39:05.652703       1 config.go:200] "Starting service config controller"
	I1121 14:39:05.652713       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:39:05.652745       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:39:05.652750       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:39:05.653031       1 config.go:309] "Starting node config controller"
	I1121 14:39:05.653091       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:39:05.653103       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:39:05.752837       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:39:05.752850       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:39:05.752866       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c834c6d0d4bb2461edebbdafedf6c4304e33b6167bb90599f1433f57e20da3ae] <==
	I1121 14:39:03.296852       1 serving.go:386] Generated self-signed cert in-memory
	W1121 14:39:04.674160       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1121 14:39:04.674999       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1121 14:39:04.675026       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1121 14:39:04.675036       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1121 14:39:04.701808       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 14:39:04.701909       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:39:04.704791       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:39:04.704832       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:39:04.705186       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:39:04.705348       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 14:39:04.805065       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:39:09 no-preload-589411 kubelet[719]: I1121 14:39:09.601722     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/00c7cb49-8fbf-4ec1-9de5-57b0f563f326-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hc2j2\" (UID: \"00c7cb49-8fbf-4ec1-9de5-57b0f563f326\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hc2j2"
	Nov 21 14:39:09 no-preload-589411 kubelet[719]: I1121 14:39:09.601816     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgf4g\" (UniqueName: \"kubernetes.io/projected/00c7cb49-8fbf-4ec1-9de5-57b0f563f326-kube-api-access-bgf4g\") pod \"kubernetes-dashboard-855c9754f9-hc2j2\" (UID: \"00c7cb49-8fbf-4ec1-9de5-57b0f563f326\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hc2j2"
	Nov 21 14:39:10 no-preload-589411 kubelet[719]: I1121 14:39:10.969338     719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 21 14:39:13 no-preload-589411 kubelet[719]: I1121 14:39:13.091238     719 scope.go:117] "RemoveContainer" containerID="c0e45e621bdae0bb17e3062e07446f789e44af4d52ace4320edd7cd68ef43478"
	Nov 21 14:39:14 no-preload-589411 kubelet[719]: I1121 14:39:14.098653     719 scope.go:117] "RemoveContainer" containerID="c0e45e621bdae0bb17e3062e07446f789e44af4d52ace4320edd7cd68ef43478"
	Nov 21 14:39:14 no-preload-589411 kubelet[719]: I1121 14:39:14.099163     719 scope.go:117] "RemoveContainer" containerID="03b4b2e7ae7513501020a8cf5ab1076c525e6821801db3a0e7c43fa9cbab9682"
	Nov 21 14:39:14 no-preload-589411 kubelet[719]: E1121 14:39:14.099683     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q5hp6_kubernetes-dashboard(60441beb-0c70-4ba3-9e8a-d744004cc985)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q5hp6" podUID="60441beb-0c70-4ba3-9e8a-d744004cc985"
	Nov 21 14:39:15 no-preload-589411 kubelet[719]: I1121 14:39:15.102506     719 scope.go:117] "RemoveContainer" containerID="03b4b2e7ae7513501020a8cf5ab1076c525e6821801db3a0e7c43fa9cbab9682"
	Nov 21 14:39:15 no-preload-589411 kubelet[719]: E1121 14:39:15.102698     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q5hp6_kubernetes-dashboard(60441beb-0c70-4ba3-9e8a-d744004cc985)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q5hp6" podUID="60441beb-0c70-4ba3-9e8a-d744004cc985"
	Nov 21 14:39:16 no-preload-589411 kubelet[719]: I1121 14:39:16.404273     719 scope.go:117] "RemoveContainer" containerID="03b4b2e7ae7513501020a8cf5ab1076c525e6821801db3a0e7c43fa9cbab9682"
	Nov 21 14:39:16 no-preload-589411 kubelet[719]: E1121 14:39:16.404522     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q5hp6_kubernetes-dashboard(60441beb-0c70-4ba3-9e8a-d744004cc985)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q5hp6" podUID="60441beb-0c70-4ba3-9e8a-d744004cc985"
	Nov 21 14:39:20 no-preload-589411 kubelet[719]: I1121 14:39:20.607604     719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hc2j2" podStartSLOduration=3.7119852250000003 podStartE2EDuration="11.607585748s" podCreationTimestamp="2025-11-21 14:39:09 +0000 UTC" firstStartedPulling="2025-11-21 14:39:09.846183225 +0000 UTC m=+7.900919807" lastFinishedPulling="2025-11-21 14:39:17.74178374 +0000 UTC m=+15.796520330" observedRunningTime="2025-11-21 14:39:18.121422015 +0000 UTC m=+16.176158624" watchObservedRunningTime="2025-11-21 14:39:20.607585748 +0000 UTC m=+18.662322345"
	Nov 21 14:39:29 no-preload-589411 kubelet[719]: I1121 14:39:29.038502     719 scope.go:117] "RemoveContainer" containerID="03b4b2e7ae7513501020a8cf5ab1076c525e6821801db3a0e7c43fa9cbab9682"
	Nov 21 14:39:29 no-preload-589411 kubelet[719]: I1121 14:39:29.137646     719 scope.go:117] "RemoveContainer" containerID="03b4b2e7ae7513501020a8cf5ab1076c525e6821801db3a0e7c43fa9cbab9682"
	Nov 21 14:39:29 no-preload-589411 kubelet[719]: I1121 14:39:29.137913     719 scope.go:117] "RemoveContainer" containerID="6aeef65086fdad560a7dbaf32d497a70041b1fcc99047c64fe71950c6ba3d738"
	Nov 21 14:39:29 no-preload-589411 kubelet[719]: E1121 14:39:29.138094     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q5hp6_kubernetes-dashboard(60441beb-0c70-4ba3-9e8a-d744004cc985)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q5hp6" podUID="60441beb-0c70-4ba3-9e8a-d744004cc985"
	Nov 21 14:39:36 no-preload-589411 kubelet[719]: I1121 14:39:36.158005     719 scope.go:117] "RemoveContainer" containerID="f18e8753eb0e613d6889e318b6e9ed46a29e321d61c64f6961a151ca05dfc2d3"
	Nov 21 14:39:36 no-preload-589411 kubelet[719]: I1121 14:39:36.404823     719 scope.go:117] "RemoveContainer" containerID="6aeef65086fdad560a7dbaf32d497a70041b1fcc99047c64fe71950c6ba3d738"
	Nov 21 14:39:36 no-preload-589411 kubelet[719]: E1121 14:39:36.405025     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q5hp6_kubernetes-dashboard(60441beb-0c70-4ba3-9e8a-d744004cc985)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q5hp6" podUID="60441beb-0c70-4ba3-9e8a-d744004cc985"
	Nov 21 14:39:49 no-preload-589411 kubelet[719]: I1121 14:39:49.038968     719 scope.go:117] "RemoveContainer" containerID="6aeef65086fdad560a7dbaf32d497a70041b1fcc99047c64fe71950c6ba3d738"
	Nov 21 14:39:49 no-preload-589411 kubelet[719]: E1121 14:39:49.039141     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q5hp6_kubernetes-dashboard(60441beb-0c70-4ba3-9e8a-d744004cc985)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q5hp6" podUID="60441beb-0c70-4ba3-9e8a-d744004cc985"
	Nov 21 14:39:54 no-preload-589411 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:39:54 no-preload-589411 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:39:54 no-preload-589411 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 21 14:39:54 no-preload-589411 systemd[1]: kubelet.service: Consumed 1.502s CPU time.
	
	
	==> kubernetes-dashboard [a5ce31187d2431d019c4a829d693487a4499b5874adf7bc5244c94414888a880] <==
	2025/11/21 14:39:17 Starting overwatch
	2025/11/21 14:39:17 Using namespace: kubernetes-dashboard
	2025/11/21 14:39:17 Using in-cluster config to connect to apiserver
	2025/11/21 14:39:17 Using secret token for csrf signing
	2025/11/21 14:39:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 14:39:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 14:39:17 Successful initial request to the apiserver, version: v1.34.1
	2025/11/21 14:39:17 Generating JWE encryption key
	2025/11/21 14:39:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 14:39:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 14:39:17 Initializing JWE encryption key from synchronized object
	2025/11/21 14:39:17 Creating in-cluster Sidecar client
	2025/11/21 14:39:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 14:39:17 Serving insecurely on HTTP port: 9090
	2025/11/21 14:39:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a6c5bc2e5adccf877569ec8c359d7f1cc50809152c4db1a39a7188aac936ef93] <==
	I1121 14:39:36.217226       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:39:36.223981       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:39:36.224014       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:39:36.225783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:39.680335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:43.940671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:47.538984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:50.592918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:53.615552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:53.624401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:39:53.624612       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:39:53.624749       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f02b2028-b558-4c82-b860-22cca0fa7d7b", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-589411_8487a500-cdf9-42af-9fdd-aee5fb04050e became leader
	I1121 14:39:53.624772       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-589411_8487a500-cdf9-42af-9fdd-aee5fb04050e!
	W1121 14:39:53.627889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:53.633956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:39:53.725246       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-589411_8487a500-cdf9-42af-9fdd-aee5fb04050e!
	W1121 14:39:55.637356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:55.641498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:57.645787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:57.652542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f18e8753eb0e613d6889e318b6e9ed46a29e321d61c64f6961a151ca05dfc2d3] <==
	I1121 14:39:05.405004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 14:39:35.408945       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
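The logs above capture two related failures: dashboard-metrics-scraper is stuck in CrashLoopBackOff (the kubelet's back-off grows from 10s to 20s across restarts), and the first storage-provisioner container died dialing the apiserver service VIP ("dial tcp 10.96.0.1:443: i/o timeout"), likely because the dial began before kube-proxy had reprogrammed the service rules after the restart; its replacement then acquired the lease normally. A minimal triage sketch, assuming kubectl access to the no-preload-589411 context (the pod name is taken from the kubelet log above; the probe image is an arbitrary choice, not part of the recorded run):

	# Events and last exit status of the crash-looping scraper pod
	kubectl --context no-preload-589411 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-q5hp6
	# Logs from the previous (crashed) container instance
	kubectl --context no-preload-589411 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-q5hp6 --previous
	# Re-probe the service VIP that timed out, from inside the cluster
	kubectl --context no-preload-589411 run api-probe --rm -i --restart=Never --image=curlimages/curl --command -- curl -ksS https://10.96.0.1:443/version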
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-589411 -n no-preload-589411
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-589411 -n no-preload-589411: exit status 2 (320.536661ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
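Note that "minikube status" can exit non-zero even when the queried field prints "Running": the exit code reflects the state of the whole profile, so a paused kubelet or an unreachable apiserver still fails the check while the host itself is up. A quick way to see which component is off, sketched against the same profile:

	# Print every component's state instead of a single template field
	out/minikube-linux-amd64 status -p no-preload-589411; echo "status exit code: $?"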
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-589411 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-589411
helpers_test.go:243: (dbg) docker inspect no-preload-589411:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45",
	        "Created": "2025-11-21T14:37:40.849517293Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:38:55.650850299Z",
	            "FinishedAt": "2025-11-21T14:38:54.682608242Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45/hosts",
	        "LogPath": "/var/lib/docker/containers/2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45/2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45-json.log",
	        "Name": "/no-preload-589411",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-589411:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-589411",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ba122d6d7a1ce2d4fd919d9499f2c4fb5076fc5a90cce1344ec358d2e5bfc45",
	                "LowerDir": "/var/lib/docker/overlay2/5b439a772b1bafc04ec7400efb1953394a63935256474aa83fdd49a49549b264-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b439a772b1bafc04ec7400efb1953394a63935256474aa83fdd49a49549b264/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b439a772b1bafc04ec7400efb1953394a63935256474aa83fdd49a49549b264/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b439a772b1bafc04ec7400efb1953394a63935256474aa83fdd49a49549b264/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-589411",
	                "Source": "/var/lib/docker/volumes/no-preload-589411/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-589411",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-589411",
	                "name.minikube.sigs.k8s.io": "no-preload-589411",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "450affadb9146c11a9249b5d32dcb199f98ff92e2191e1f2bd37f92de37d70b0",
	            "SandboxKey": "/var/run/docker/netns/450affadb914",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-589411": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "16216427221de4c7c427a254dcd5d0745c57cde4857ab5c433751b20e1dda883",
	                    "EndpointID": "1ab410781c6734ac1b1e596db10964d19ad351d7d723d7a658d45a0d93a8c334",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "56:13:95:9e:0f:69",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-589411",
	                        "2ba122d6d7a1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
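The inspect output shows the kicbase container publishing its ports on ephemeral localhost ports (8443, the apiserver port, landed on 127.0.0.1:33072 here). The mapping can be read back directly with the same Go-template pattern the harness uses below for the SSH port; a sketch:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-589411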
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589411 -n no-preload-589411
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589411 -n no-preload-589411: exit status 2 (308.159313ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-589411 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-589411 logs -n 25: (1.150170467s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p pause-738756 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-738756              │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ pause   │ -p pause-738756 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-738756              │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	│ delete  │ -p pause-738756                                                                                                                                                                                                                               │ pause-738756              │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:37 UTC │
	│ start   │ -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-794941 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │                     │
	│ stop    │ -p old-k8s-version-794941 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:37 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-794941 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ start   │ -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p no-preload-589411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │                     │
	│ stop    │ -p no-preload-589411 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ image   │ old-k8s-version-794941 image list --format=json                                                                                                                                                                                               │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ pause   │ -p old-k8s-version-794941 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-589411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ start   │ -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:39 UTC │
	│ delete  │ -p old-k8s-version-794941                                                                                                                                                                                                                     │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:39 UTC │
	│ delete  │ -p old-k8s-version-794941                                                                                                                                                                                                                     │ old-k8s-version-794941    │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ start   │ -p embed-certs-441390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-441390        │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ start   │ -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-214044 │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ start   │ -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-214044 │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ addons  │ enable metrics-server -p embed-certs-441390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-441390        │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ image   │ no-preload-589411 image list --format=json                                                                                                                                                                                                    │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ pause   │ -p no-preload-589411 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-589411         │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ stop    │ -p embed-certs-441390 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-441390        │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ start   │ -p cert-expiration-046125 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-046125    │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-214044                                                                                                                                                                                                                  │ kubernetes-upgrade-214044 │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:39:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:39:57.112847  264609 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:39:57.113069  264609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:39:57.113072  264609 out.go:374] Setting ErrFile to fd 2...
	I1121 14:39:57.113075  264609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:39:57.113244  264609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:39:57.113671  264609 out.go:368] Setting JSON to false
	I1121 14:39:57.114851  264609 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4946,"bootTime":1763731051,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:39:57.114941  264609 start.go:143] virtualization: kvm guest
	I1121 14:39:57.118263  264609 out.go:179] * [cert-expiration-046125] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:39:57.120521  264609 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:39:57.120648  264609 notify.go:221] Checking for updates...
	I1121 14:39:57.123979  264609 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:39:57.125601  264609 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:39:57.127065  264609 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:39:57.128243  264609 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:39:57.129439  264609 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:39:57.132543  264609 config.go:182] Loaded profile config "cert-expiration-046125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:39:57.133281  264609 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:39:57.162197  264609 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:39:57.162419  264609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:39:57.236839  264609 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:84 OomKillDisable:false NGoroutines:94 SystemTime:2025-11-21 14:39:57.224642439 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:39:57.236976  264609 docker.go:319] overlay module found
	I1121 14:39:57.239175  264609 out.go:179] * Using the docker driver based on existing profile
	I1121 14:39:57.240306  264609 start.go:309] selected driver: docker
	I1121 14:39:57.240316  264609 start.go:930] validating driver "docker" against &{Name:cert-expiration-046125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-046125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:39:57.240402  264609 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:39:57.241251  264609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:39:57.309943  264609 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:84 OomKillDisable:false NGoroutines:94 SystemTime:2025-11-21 14:39:57.299467047 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:39:57.310273  264609 cni.go:84] Creating CNI manager for ""
	I1121 14:39:57.310337  264609 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:39:57.310381  264609 start.go:353] cluster config:
	{Name:cert-expiration-046125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-046125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:39:57.312015  264609 out.go:179] * Starting "cert-expiration-046125" primary control-plane node in "cert-expiration-046125" cluster
	I1121 14:39:57.313816  264609 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:39:57.315167  264609 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:39:57.316156  264609 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:39:57.316184  264609 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 14:39:57.316206  264609 cache.go:65] Caching tarball of preloaded images
	I1121 14:39:57.316282  264609 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:39:57.316296  264609 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 14:39:57.316307  264609 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:39:57.316420  264609 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/cert-expiration-046125/config.json ...
	I1121 14:39:57.338114  264609 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:39:57.338127  264609 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:39:57.338158  264609 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:39:57.338187  264609 start.go:360] acquireMachinesLock for cert-expiration-046125: {Name:mk298a8e6e0ef49ec4a32cb23540189ade410f47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:39:57.338286  264609 start.go:364] duration metric: took 72.908µs to acquireMachinesLock for "cert-expiration-046125"
	I1121 14:39:57.338305  264609 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:39:57.338310  264609 fix.go:54] fixHost starting: 
	I1121 14:39:57.338613  264609 cli_runner.go:164] Run: docker container inspect cert-expiration-046125 --format={{.State.Status}}
	I1121 14:39:57.359297  264609 fix.go:112] recreateIfNeeded on cert-expiration-046125: state=Running err=<nil>
	W1121 14:39:57.359312  264609 fix.go:138] unexpected machine state, will restart: <nil>
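
	The acquireMachinesLock entries above ("Delay:500ms Timeout:10m0s", "took 72.908µs to acquireMachinesLock") reflect a poll-until-timeout lock pattern. A minimal in-process sketch of that pattern, assuming hypothetical names (minikube's real implementation uses a cross-process mutex, not shown here):

	package main

	import (
		"errors"
		"fmt"
		"sync"
		"time"
	)

	var machineLocks sync.Map // profile name -> *sync.Mutex (in-process stand-in)

	// acquireWithRetry polls TryLock every delay until the timeout expires,
	// mirroring the Delay/Timeout fields printed in the log above.
	func acquireWithRetry(profile string, delay, timeout time.Duration) (*sync.Mutex, error) {
		v, _ := machineLocks.LoadOrStore(profile, &sync.Mutex{})
		mu := v.(*sync.Mutex)
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if mu.TryLock() {
				return mu, nil
			}
			time.Sleep(delay)
		}
		return nil, errors.New("timed out acquiring machines lock for " + profile)
	}

	func main() {
		start := time.Now()
		mu, err := acquireWithRetry("cert-expiration-046125", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer mu.Unlock()
		// Corresponds to the "duration metric: took ... to acquireMachinesLock" line.
		fmt.Printf("took %s to acquire machines lock\n", time.Since(start))
	}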
	I1121 14:39:56.816961  261994 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-214044"
	W1121 14:39:56.816978  261994 addons.go:248] addon default-storageclass should already be in state true
	I1121 14:39:56.817010  261994 host.go:66] Checking if "kubernetes-upgrade-214044" exists ...
	I1121 14:39:56.817550  261994 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-214044 --format={{.State.Status}}
	I1121 14:39:56.818803  261994 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:39:56.818825  261994 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:39:56.818878  261994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-214044
	I1121 14:39:56.851147  261994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/kubernetes-upgrade-214044/id_rsa Username:docker}
	I1121 14:39:56.855021  261994 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:39:56.855050  261994 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:39:56.855100  261994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-214044
	I1121 14:39:56.890802  261994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/kubernetes-upgrade-214044/id_rsa Username:docker}
	I1121 14:39:56.944710  261994 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:39:56.958838  261994 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:39:56.958900  261994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:39:56.970209  261994 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:39:56.970638  261994 api_server.go:72] duration metric: took 183.659131ms to wait for apiserver process to appear ...
	I1121 14:39:56.970666  261994 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:39:56.970724  261994 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1121 14:39:56.976333  261994 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1121 14:39:56.983745  261994 api_server.go:141] control plane version: v1.34.1
	I1121 14:39:56.983769  261994 api_server.go:131] duration metric: took 13.095015ms to wait for apiserver health ...
	I1121 14:39:56.983781  261994 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:39:56.987454  261994 system_pods.go:59] 9 kube-system pods found
	I1121 14:39:56.987478  261994 system_pods.go:61] "coredns-66bc5c9577-ghrnc" [ed199c43-c1c4-4daf-a241-3a2fa70291ee] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 14:39:56.987484  261994 system_pods.go:61] "coredns-66bc5c9577-z4xrd" [ad2cad82-9f07-4a95-8022-711ad30cd015] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 14:39:56.987505  261994 system_pods.go:61] "etcd-kubernetes-upgrade-214044" [eb017f8c-d0f0-45e2-9d94-aa4506c9dc27] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:39:56.987511  261994 system_pods.go:61] "kindnet-6rvkz" [2bafc8af-f8ef-4ca1-ba0e-0f56657afa6d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1121 14:39:56.987516  261994 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-214044" [146b0502-8be8-471c-877a-9c62b8a98854] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:39:56.987525  261994 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-214044" [ab123c5c-f424-493c-9a63-1c81bdfee7ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:39:56.987530  261994 system_pods.go:61] "kube-proxy-9znqn" [81dd7392-85ad-4950-a7c7-abbfe478ed9a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1121 14:39:56.987535  261994 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-214044" [642a2244-2194-4877-8fdd-c0e4d7ad36e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:39:56.987539  261994 system_pods.go:61] "storage-provisioner" [09207a10-c3ab-45fa-bc6a-a81899344c40] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 14:39:56.987544  261994 system_pods.go:74] duration metric: took 3.757454ms to wait for pod list to return data ...
	I1121 14:39:56.987553  261994 kubeadm.go:587] duration metric: took 200.590065ms to wait for: map[apiserver:true system_pods:true]
	I1121 14:39:56.987591  261994 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:39:56.990141  261994 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:39:56.990167  261994 node_conditions.go:123] node cpu capacity is 8
	I1121 14:39:56.990179  261994 node_conditions.go:105] duration metric: took 2.583788ms to run NodePressure ...
	I1121 14:39:56.990193  261994 start.go:242] waiting for startup goroutines ...
	I1121 14:39:57.000337  261994 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:39:57.523224  261994 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:39:57.524627  261994 addons.go:530] duration metric: took 737.306187ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:39:57.524668  261994 start.go:247] waiting for cluster config update ...
	I1121 14:39:57.524682  261994 start.go:256] writing updated cluster config ...
	I1121 14:39:57.524903  261994 ssh_runner.go:195] Run: rm -f paused
	I1121 14:39:57.578581  261994 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:39:57.580405  261994 out.go:179] * Done! kubectl is now configured to use "kubernetes-upgrade-214044" cluster and "default" namespace by default
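
	The "kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)" line above is a client/server version-skew check: only the minor components are compared. A simplified sketch of that comparison (minikube itself parses versions with a semver library; this hand-rolls the parse for illustration):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component from a "major.minor.patch" version string.
	func minor(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}

	func main() {
		kubectlMinor, _ := minor("1.34.2") // error handling elided in this sketch
		clusterMinor, _ := minor("1.34.1")
		skew := kubectlMinor - clusterMinor
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: 1.34.2, cluster: 1.34.1 (minor skew: %d)\n", skew) // prints skew 0
	}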
	
	
	==> CRI-O <==
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.159226644Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f4c1a60e-d116-4c51-a0fb-fecc24735b13 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.160226777Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d51a8c32-acbf-4ddd-84c0-43173d3917da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.160360589Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.165005342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.165135707Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c752e60adc3af0ad327883bd6c48e1cb1055ca6c4b9c8612155791da9ec211e4/merged/etc/passwd: no such file or directory"
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.165157465Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c752e60adc3af0ad327883bd6c48e1cb1055ca6c4b9c8612155791da9ec211e4/merged/etc/group: no such file or directory"
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.165356075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.201665977Z" level=info msg="Created container a6c5bc2e5adccf877569ec8c359d7f1cc50809152c4db1a39a7188aac936ef93: kube-system/storage-provisioner/storage-provisioner" id=d51a8c32-acbf-4ddd-84c0-43173d3917da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.202196192Z" level=info msg="Starting container: a6c5bc2e5adccf877569ec8c359d7f1cc50809152c4db1a39a7188aac936ef93" id=b52968c2-bf58-4b82-8ebc-1518b0ef6e9c name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:39:36 no-preload-589411 crio[566]: time="2025-11-21T14:39:36.203828841Z" level=info msg="Started container" PID=1705 containerID=a6c5bc2e5adccf877569ec8c359d7f1cc50809152c4db1a39a7188aac936ef93 description=kube-system/storage-provisioner/storage-provisioner id=b52968c2-bf58-4b82-8ebc-1518b0ef6e9c name=/runtime.v1.RuntimeService/StartContainer sandboxID=74aa257caa83ddd55787a593a0270be3a259066e68c835f47234eb50660fa0c7
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.776934744Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.78099093Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.781011745Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.781028523Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.784483417Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.784504006Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.784519973Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.788134635Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.788168482Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.788183199Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.79133425Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.791351905Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.791368932Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.794585117Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:39:45 no-preload-589411 crio[566]: time="2025-11-21T14:39:45.794606858Z" level=info msg="Updated default CNI network name to kindnet"
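
	The "CNI monitoring event CREATE/WRITE/RENAME" lines above come from CRI-O watching /etc/cni/net.d with inotify and re-reading the CNI config on each change. A minimal sketch of that watch loop using github.com/fsnotify/fsnotify, with the reload left as a stub (this is the general pattern, not CRI-O's actual code):

	package main

	import (
		"log"

		"github.com/fsnotify/fsnotify"
	)

	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev := <-w.Events:
				// Mirrors the "CNI monitoring event ..." lines in the log above.
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
				if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
					log.Printf("re-reading CNI network configurations") // stub for the real reload
				}
			case err := <-w.Errors:
				log.Printf("watch error: %v", err)
			}
		}
	}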
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a6c5bc2e5adcc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   74aa257caa83d       storage-provisioner                          kube-system
	6aeef65086fda       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago      Exited              dashboard-metrics-scraper   2                   97041785d9272       dashboard-metrics-scraper-6ffb444bf9-q5hp6   kubernetes-dashboard
	a5ce31187d243       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   fe22143f1216f       kubernetes-dashboard-855c9754f9-hc2j2        kubernetes-dashboard
	ef511d5cab6bd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   1dbe83d30878a       coredns-66bc5c9577-db94z                     kube-system
	b7517f4fdccc7       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   e6cc1b466f00d       busybox                                      default
	f18e8753eb0e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   74aa257caa83d       storage-provisioner                          kube-system
	d03597d623b13       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   2f237ed59bff7       kube-proxy-qhp5d                             kube-system
	34900b3fb7768       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   0e6e5fd93068b       kindnet-h7k2r                                kube-system
	c834c6d0d4bb2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   6e275ab09475a       kube-scheduler-no-preload-589411             kube-system
	a6f06b907ba72       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   c3a4b02bf6f51       kube-controller-manager-no-preload-589411    kube-system
	3168754e97e94       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   9422674c00b7e       kube-apiserver-no-preload-589411             kube-system
	c243e5d767317       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   c2089a495f230       etcd-no-preload-589411                       kube-system
	
	
	==> coredns [ef511d5cab6bd6a19210f4020240515d2d470bc2d5e76d031d3fb82a1b0f13e5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:48817 - 21556 "HINFO IN 1772689697979126168.352344727929161197. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.115018342s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-589411
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-589411
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=no-preload-589411
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_38_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:38:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-589411
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:39:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:39:35 +0000   Fri, 21 Nov 2025 14:38:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:39:35 +0000   Fri, 21 Nov 2025 14:38:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:39:35 +0000   Fri, 21 Nov 2025 14:38:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:39:35 +0000   Fri, 21 Nov 2025 14:38:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-589411
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                8c0c3626-fd96-4939-aead-166c796faa08
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-db94z                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-589411                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-h7k2r                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-589411              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-589411     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-qhp5d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-589411              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-q5hp6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hc2j2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 53s                  kube-proxy       
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)  kubelet          Node no-preload-589411 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)  kubelet          Node no-preload-589411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x8 over 117s)  kubelet          Node no-preload-589411 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node no-preload-589411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node no-preload-589411 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node no-preload-589411 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s                 node-controller  Node no-preload-589411 event: Registered Node no-preload-589411 in Controller
	  Normal  NodeReady                94s                  kubelet          Node no-preload-589411 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node no-preload-589411 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node no-preload-589411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node no-preload-589411 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                  node-controller  Node no-preload-589411 event: Registered Node no-preload-589411 in Controller
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [c243e5d767317fb718a9f77aecb00ce5ae279ec984417532538cae290c317a30] <==
	{"level":"info","ts":"2025-11-21T14:39:05.924121Z","caller":"traceutil/trace.go:172","msg":"trace[1351447124] range","detail":"{range_begin:/registry/clusterroles/system:heapster; range_end:; response_count:1; response_revision:503; }","duration":"140.961208ms","start":"2025-11-21T14:39:05.783152Z","end":"2025-11-21T14:39:05.924113Z","steps":["trace[1351447124] 'agreement among raft nodes before linearized reading'  (duration: 140.82113ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T14:39:05.924122Z","caller":"traceutil/trace.go:172","msg":"trace[944089475] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"143.276227ms","start":"2025-11-21T14:39:05.780829Z","end":"2025-11-21T14:39:05.924106Z","steps":["trace[944089475] 'process raft request'  (duration: 143.148476ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-21T14:39:06.137028Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.694739ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:attachdetach-controller\" limit:1 ","response":"range_response_count:1 size:767"}
	{"level":"info","ts":"2025-11-21T14:39:06.137099Z","caller":"traceutil/trace.go:172","msg":"trace[1052132736] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:attachdetach-controller; range_end:; response_count:1; response_revision:504; }","duration":"113.776607ms","start":"2025-11-21T14:39:06.023308Z","end":"2025-11-21T14:39:06.137085Z","steps":["trace[1052132736] 'range keys from in-memory index tree'  (duration: 113.522693ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T14:39:06.295901Z","caller":"traceutil/trace.go:172","msg":"trace[912518461] linearizableReadLoop","detail":"{readStateIndex:532; appliedIndex:532; }","duration":"131.742906ms","start":"2025-11-21T14:39:06.164134Z","end":"2025-11-21T14:39:06.295877Z","steps":["trace[912518461] 'read index received'  (duration: 131.735722ms)","trace[912518461] 'applied index is now lower than readState.Index'  (duration: 6.465µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-21T14:39:06.296038Z","caller":"traceutil/trace.go:172","msg":"trace[1716672919] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"132.530401ms","start":"2025-11-21T14:39:06.163497Z","end":"2025-11-21T14:39:06.296027Z","steps":["trace[1716672919] 'process raft request'  (duration: 132.412671ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-21T14:39:06.296063Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.903764ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:resource-claim-controller\" limit:1 ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2025-11-21T14:39:06.296103Z","caller":"traceutil/trace.go:172","msg":"trace[1807869704] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:resource-claim-controller; range_end:; response_count:1; response_revision:505; }","duration":"131.966297ms","start":"2025-11-21T14:39:06.164127Z","end":"2025-11-21T14:39:06.296093Z","steps":["trace[1807869704] 'agreement among raft nodes before linearized reading'  (duration: 131.808312ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T14:39:06.556731Z","caller":"traceutil/trace.go:172","msg":"trace[887045539] linearizableReadLoop","detail":"{readStateIndex:533; appliedIndex:533; }","duration":"252.052984ms","start":"2025-11-21T14:39:06.304640Z","end":"2025-11-21T14:39:06.556693Z","steps":["trace[887045539] 'read index received'  (duration: 252.045727ms)","trace[887045539] 'applied index is now lower than readState.Index'  (duration: 6.419µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T14:39:06.687640Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"382.972806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:persistent-volume-binder\" limit:1 ","response":"range_response_count:1 size:771"}
	{"level":"info","ts":"2025-11-21T14:39:06.687705Z","caller":"traceutil/trace.go:172","msg":"trace[1877397099] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:persistent-volume-binder; range_end:; response_count:1; response_revision:506; }","duration":"383.053395ms","start":"2025-11-21T14:39:06.304634Z","end":"2025-11-21T14:39:06.687688Z","steps":["trace[1877397099] 'agreement among raft nodes before linearized reading'  (duration: 252.149749ms)","trace[1877397099] 'range keys from in-memory index tree'  (duration: 130.736931ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T14:39:06.687746Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-21T14:39:06.304627Z","time spent":"383.106663ms","remote":"127.0.0.1:53036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":1,"response size":794,"request content":"key:\"/registry/clusterrolebindings/system:controller:persistent-volume-binder\" limit:1 "}
	{"level":"warn","ts":"2025-11-21T14:39:06.688111Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.923941ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597211987367770 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-qhp5d\" mod_revision:494 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-qhp5d\" value_size:4977 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-qhp5d\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-21T14:39:06.688179Z","caller":"traceutil/trace.go:172","msg":"trace[517875784] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"384.216912ms","start":"2025-11-21T14:39:06.303950Z","end":"2025-11-21T14:39:06.688167Z","steps":["trace[517875784] 'process raft request'  (duration: 252.78005ms)","trace[517875784] 'compare'  (duration: 130.810023ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T14:39:06.688250Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-21T14:39:06.303935Z","time spent":"384.270575ms","remote":"127.0.0.1:52684","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5028,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-qhp5d\" mod_revision:494 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-qhp5d\" value_size:4977 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-qhp5d\" > >"}
	{"level":"info","ts":"2025-11-21T14:39:06.837628Z","caller":"traceutil/trace.go:172","msg":"trace[1831644993] linearizableReadLoop","detail":"{readStateIndex:534; appliedIndex:534; }","duration":"139.564832ms","start":"2025-11-21T14:39:06.698044Z","end":"2025-11-21T14:39:06.837609Z","steps":["trace[1831644993] 'read index received'  (duration: 139.558727ms)","trace[1831644993] 'applied index is now lower than readState.Index'  (duration: 5.362µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T14:39:06.985159Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"287.085092ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:service-controller\" limit:1 ","response":"range_response_count:1 size:747"}
	{"level":"info","ts":"2025-11-21T14:39:06.985222Z","caller":"traceutil/trace.go:172","msg":"trace[384454277] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:service-controller; range_end:; response_count:1; response_revision:507; }","duration":"287.16716ms","start":"2025-11-21T14:39:06.698042Z","end":"2025-11-21T14:39:06.985209Z","steps":["trace[384454277] 'agreement among raft nodes before linearized reading'  (duration: 139.659856ms)","trace[384454277] 'range keys from in-memory index tree'  (duration: 147.320383ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T14:39:06.985254Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.482593ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597211987367779 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-db94z\" mod_revision:497 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-db94z\" value_size:5859 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-db94z\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-21T14:39:06.985333Z","caller":"traceutil/trace.go:172","msg":"trace[531759135] linearizableReadLoop","detail":"{readStateIndex:535; appliedIndex:534; }","duration":"147.645761ms","start":"2025-11-21T14:39:06.837677Z","end":"2025-11-21T14:39:06.985323Z","steps":["trace[531759135] 'read index received'  (duration: 40.116µs)","trace[531759135] 'applied index is now lower than readState.Index'  (duration: 147.604818ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-21T14:39:06.985359Z","caller":"traceutil/trace.go:172","msg":"trace[682881008] transaction","detail":"{read_only:false; response_revision:508; number_of_response:1; }","duration":"287.891529ms","start":"2025-11-21T14:39:06.697443Z","end":"2025-11-21T14:39:06.985335Z","steps":["trace[682881008] 'process raft request'  (duration: 140.26124ms)","trace[682881008] 'compare'  (duration: 147.362549ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T14:39:06.985398Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"252.716272ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T14:39:06.985419Z","caller":"traceutil/trace.go:172","msg":"trace[2142754160] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:508; }","duration":"252.737936ms","start":"2025-11-21T14:39:06.732674Z","end":"2025-11-21T14:39:06.985412Z","steps":["trace[2142754160] 'agreement among raft nodes before linearized reading'  (duration: 252.683781ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-21T14:39:07.315817Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.348736ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.85.2\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T14:39:07.315870Z","caller":"traceutil/trace.go:172","msg":"trace[694863912] range","detail":"{range_begin:/registry/masterleases/192.168.85.2; range_end:; response_count:0; response_revision:509; }","duration":"182.416868ms","start":"2025-11-21T14:39:07.133441Z","end":"2025-11-21T14:39:07.315858Z","steps":["trace[694863912] 'range keys from in-memory index tree'  (duration: 182.286039ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:39:59 up  1:22,  0 user,  load average: 2.71, 2.47, 1.70
	Linux no-preload-589411 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [34900b3fb77684bd3de1f265b714992b9499c98fd614328a74995aa089a0d576] <==
	I1121 14:39:05.572662       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:39:05.572868       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:39:05.573016       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:39:05.573037       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:39:05.573065       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:39:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:39:05.772950       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:39:05.773359       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:39:05.773409       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:39:05.773547       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 14:39:35.774358       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 14:39:35.774361       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 14:39:35.774359       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 14:39:35.774428       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1121 14:39:37.373698       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:39:37.373724       1 metrics.go:72] Registering metrics
	I1121 14:39:37.373797       1 controller.go:711] "Syncing nftables rules"
	I1121 14:39:45.776647       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:39:45.776699       1 main.go:301] handling current node
	I1121 14:39:55.781650       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1121 14:39:55.781753       1 main.go:301] handling current node
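
	The "Waiting for informer caches to sync" / "Caches are synced" pair in the kindnet log above is the standard client-go informer startup handshake: start the shared informers, then block until their local caches reflect the API server before acting. A minimal sketch with a pod informer, assuming in-cluster config and trimming error handling:

	package main

	import (
		"log"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(client, 30*time.Second)
		podInformer := factory.Core().V1().Pods().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		log.Println("Waiting for informer caches to sync")
		if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
			log.Fatal("failed to sync informer caches")
		}
		log.Println("Caches are synced") // only now is it safe to read from the cache
	}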
	
	
	==> kube-apiserver [3168754e97e94add6d3faf44e9b1d45479c54acaeea9af4b8903986d660beead] <==
	I1121 14:39:04.726676       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 14:39:04.726785       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1121 14:39:04.726843       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:39:04.727354       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1121 14:39:04.727592       1 aggregator.go:171] initial CRD sync complete...
	I1121 14:39:04.727631       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 14:39:04.727654       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:39:04.727677       1 cache.go:39] Caches are synced for autoregister controller
	E1121 14:39:04.731465       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 14:39:04.731611       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 14:39:04.762646       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1121 14:39:04.769874       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1121 14:39:04.769906       1 policy_source.go:240] refreshing policies
	I1121 14:39:04.780688       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:39:05.015172       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:39:05.044293       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:39:05.064055       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:39:05.071323       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:39:05.078488       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:39:05.125299       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.122.74"}
	I1121 14:39:05.138946       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.156.249"}
	I1121 14:39:05.629073       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:39:09.103214       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:39:09.455335       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:39:09.552498       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a6f06b907ba722c73068717b14a501d45b65b1b50b019336f8f72e20f97d4877] <==
	I1121 14:39:09.001864       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:39:09.001697       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 14:39:09.002026       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 14:39:09.003257       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 14:39:09.005467       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 14:39:09.014716       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:39:09.051058       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:39:09.051112       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:39:09.051143       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:39:09.051189       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:39:09.051204       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:39:09.051446       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:39:09.051592       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 14:39:09.051680       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:39:09.057916       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:39:09.058998       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 14:39:09.059059       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 14:39:09.059094       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 14:39:09.059103       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 14:39:09.059110       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 14:39:09.065352       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:39:09.071705       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:39:09.071723       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:39:09.071731       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:39:09.073809       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d03597d623b131413ff01082d5cde4837c45ee0cde1f38ed17fcd16f4ca1e79b] <==
	I1121 14:39:05.423805       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:39:05.486132       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:39:05.587027       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:39:05.587056       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:39:05.587116       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:39:05.644601       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:39:05.644668       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:39:05.650537       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:39:05.650953       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:39:05.650988       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:39:05.652616       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:39:05.652652       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:39:05.652703       1 config.go:200] "Starting service config controller"
	I1121 14:39:05.652713       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:39:05.652745       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:39:05.652750       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:39:05.653031       1 config.go:309] "Starting node config controller"
	I1121 14:39:05.653091       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:39:05.653103       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:39:05.752837       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:39:05.752850       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:39:05.752866       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c834c6d0d4bb2461edebbdafedf6c4304e33b6167bb90599f1433f57e20da3ae] <==
	I1121 14:39:03.296852       1 serving.go:386] Generated self-signed cert in-memory
	W1121 14:39:04.674160       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1121 14:39:04.674999       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1121 14:39:04.675026       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1121 14:39:04.675036       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1121 14:39:04.701808       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 14:39:04.701909       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:39:04.704791       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:39:04.704832       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:39:04.705186       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:39:04.705348       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 14:39:04.805065       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:39:09 no-preload-589411 kubelet[719]: I1121 14:39:09.601722     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/00c7cb49-8fbf-4ec1-9de5-57b0f563f326-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hc2j2\" (UID: \"00c7cb49-8fbf-4ec1-9de5-57b0f563f326\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hc2j2"
	Nov 21 14:39:09 no-preload-589411 kubelet[719]: I1121 14:39:09.601816     719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgf4g\" (UniqueName: \"kubernetes.io/projected/00c7cb49-8fbf-4ec1-9de5-57b0f563f326-kube-api-access-bgf4g\") pod \"kubernetes-dashboard-855c9754f9-hc2j2\" (UID: \"00c7cb49-8fbf-4ec1-9de5-57b0f563f326\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hc2j2"
	Nov 21 14:39:10 no-preload-589411 kubelet[719]: I1121 14:39:10.969338     719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 21 14:39:13 no-preload-589411 kubelet[719]: I1121 14:39:13.091238     719 scope.go:117] "RemoveContainer" containerID="c0e45e621bdae0bb17e3062e07446f789e44af4d52ace4320edd7cd68ef43478"
	Nov 21 14:39:14 no-preload-589411 kubelet[719]: I1121 14:39:14.098653     719 scope.go:117] "RemoveContainer" containerID="c0e45e621bdae0bb17e3062e07446f789e44af4d52ace4320edd7cd68ef43478"
	Nov 21 14:39:14 no-preload-589411 kubelet[719]: I1121 14:39:14.099163     719 scope.go:117] "RemoveContainer" containerID="03b4b2e7ae7513501020a8cf5ab1076c525e6821801db3a0e7c43fa9cbab9682"
	Nov 21 14:39:14 no-preload-589411 kubelet[719]: E1121 14:39:14.099683     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q5hp6_kubernetes-dashboard(60441beb-0c70-4ba3-9e8a-d744004cc985)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q5hp6" podUID="60441beb-0c70-4ba3-9e8a-d744004cc985"
	Nov 21 14:39:15 no-preload-589411 kubelet[719]: I1121 14:39:15.102506     719 scope.go:117] "RemoveContainer" containerID="03b4b2e7ae7513501020a8cf5ab1076c525e6821801db3a0e7c43fa9cbab9682"
	Nov 21 14:39:15 no-preload-589411 kubelet[719]: E1121 14:39:15.102698     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q5hp6_kubernetes-dashboard(60441beb-0c70-4ba3-9e8a-d744004cc985)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q5hp6" podUID="60441beb-0c70-4ba3-9e8a-d744004cc985"
	Nov 21 14:39:16 no-preload-589411 kubelet[719]: I1121 14:39:16.404273     719 scope.go:117] "RemoveContainer" containerID="03b4b2e7ae7513501020a8cf5ab1076c525e6821801db3a0e7c43fa9cbab9682"
	Nov 21 14:39:16 no-preload-589411 kubelet[719]: E1121 14:39:16.404522     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q5hp6_kubernetes-dashboard(60441beb-0c70-4ba3-9e8a-d744004cc985)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q5hp6" podUID="60441beb-0c70-4ba3-9e8a-d744004cc985"
	Nov 21 14:39:20 no-preload-589411 kubelet[719]: I1121 14:39:20.607604     719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hc2j2" podStartSLOduration=3.7119852250000003 podStartE2EDuration="11.607585748s" podCreationTimestamp="2025-11-21 14:39:09 +0000 UTC" firstStartedPulling="2025-11-21 14:39:09.846183225 +0000 UTC m=+7.900919807" lastFinishedPulling="2025-11-21 14:39:17.74178374 +0000 UTC m=+15.796520330" observedRunningTime="2025-11-21 14:39:18.121422015 +0000 UTC m=+16.176158624" watchObservedRunningTime="2025-11-21 14:39:20.607585748 +0000 UTC m=+18.662322345"
	Nov 21 14:39:29 no-preload-589411 kubelet[719]: I1121 14:39:29.038502     719 scope.go:117] "RemoveContainer" containerID="03b4b2e7ae7513501020a8cf5ab1076c525e6821801db3a0e7c43fa9cbab9682"
	Nov 21 14:39:29 no-preload-589411 kubelet[719]: I1121 14:39:29.137646     719 scope.go:117] "RemoveContainer" containerID="03b4b2e7ae7513501020a8cf5ab1076c525e6821801db3a0e7c43fa9cbab9682"
	Nov 21 14:39:29 no-preload-589411 kubelet[719]: I1121 14:39:29.137913     719 scope.go:117] "RemoveContainer" containerID="6aeef65086fdad560a7dbaf32d497a70041b1fcc99047c64fe71950c6ba3d738"
	Nov 21 14:39:29 no-preload-589411 kubelet[719]: E1121 14:39:29.138094     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q5hp6_kubernetes-dashboard(60441beb-0c70-4ba3-9e8a-d744004cc985)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q5hp6" podUID="60441beb-0c70-4ba3-9e8a-d744004cc985"
	Nov 21 14:39:36 no-preload-589411 kubelet[719]: I1121 14:39:36.158005     719 scope.go:117] "RemoveContainer" containerID="f18e8753eb0e613d6889e318b6e9ed46a29e321d61c64f6961a151ca05dfc2d3"
	Nov 21 14:39:36 no-preload-589411 kubelet[719]: I1121 14:39:36.404823     719 scope.go:117] "RemoveContainer" containerID="6aeef65086fdad560a7dbaf32d497a70041b1fcc99047c64fe71950c6ba3d738"
	Nov 21 14:39:36 no-preload-589411 kubelet[719]: E1121 14:39:36.405025     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q5hp6_kubernetes-dashboard(60441beb-0c70-4ba3-9e8a-d744004cc985)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q5hp6" podUID="60441beb-0c70-4ba3-9e8a-d744004cc985"
	Nov 21 14:39:49 no-preload-589411 kubelet[719]: I1121 14:39:49.038968     719 scope.go:117] "RemoveContainer" containerID="6aeef65086fdad560a7dbaf32d497a70041b1fcc99047c64fe71950c6ba3d738"
	Nov 21 14:39:49 no-preload-589411 kubelet[719]: E1121 14:39:49.039141     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q5hp6_kubernetes-dashboard(60441beb-0c70-4ba3-9e8a-d744004cc985)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q5hp6" podUID="60441beb-0c70-4ba3-9e8a-d744004cc985"
	Nov 21 14:39:54 no-preload-589411 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:39:54 no-preload-589411 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:39:54 no-preload-589411 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 21 14:39:54 no-preload-589411 systemd[1]: kubelet.service: Consumed 1.502s CPU time.
	
	
	==> kubernetes-dashboard [a5ce31187d2431d019c4a829d693487a4499b5874adf7bc5244c94414888a880] <==
	2025/11/21 14:39:17 Using namespace: kubernetes-dashboard
	2025/11/21 14:39:17 Using in-cluster config to connect to apiserver
	2025/11/21 14:39:17 Using secret token for csrf signing
	2025/11/21 14:39:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 14:39:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 14:39:17 Successful initial request to the apiserver, version: v1.34.1
	2025/11/21 14:39:17 Generating JWE encryption key
	2025/11/21 14:39:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 14:39:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 14:39:17 Initializing JWE encryption key from synchronized object
	2025/11/21 14:39:17 Creating in-cluster Sidecar client
	2025/11/21 14:39:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 14:39:17 Serving insecurely on HTTP port: 9090
	2025/11/21 14:39:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 14:39:17 Starting overwatch
	
	
	==> storage-provisioner [a6c5bc2e5adccf877569ec8c359d7f1cc50809152c4db1a39a7188aac936ef93] <==
	I1121 14:39:36.217226       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:39:36.223981       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:39:36.224014       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:39:36.225783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:39.680335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:43.940671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:47.538984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:50.592918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:53.615552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:53.624401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:39:53.624612       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:39:53.624749       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f02b2028-b558-4c82-b860-22cca0fa7d7b", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-589411_8487a500-cdf9-42af-9fdd-aee5fb04050e became leader
	I1121 14:39:53.624772       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-589411_8487a500-cdf9-42af-9fdd-aee5fb04050e!
	W1121 14:39:53.627889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:53.633956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:39:53.725246       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-589411_8487a500-cdf9-42af-9fdd-aee5fb04050e!
	W1121 14:39:55.637356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:55.641498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:57.645787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:57.652542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:59.656212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:39:59.661233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f18e8753eb0e613d6889e318b6e9ed46a29e321d61c64f6961a151ca05dfc2d3] <==
	I1121 14:39:05.405004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 14:39:35.408945       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
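Note: the two storage-provisioner blocks above show a standard client-go leader-election handoff — the first instance (f18e87…) never reached the apiserver and exited, and its replacement (a6c5bc…) acquired the kube-system/k8s.io-minikube-hostpath lease before starting the controller. The repeated "v1 Endpoints is deprecated" warnings come from the provisioner's Endpoints-based lock. A minimal sketch of the same pattern, assuming a recent client-go, using the modern Leases lock instead of Endpoints (all names besides the lease name are illustrative):

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Same in-cluster config path the provisioner logs ("Initializing the
	// minikube storage provisioner...") imply.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Identity ends up in the lease, like "no-preload-589411_8487a500-..." above.
	id, _ := os.Hostname()
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath", // lock name from the logs
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id})
	if err != nil {
		log.Fatal(err)
	}

	// Blocks until the lease is lost; only the leader runs the controller.
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; exiting")
			},
		},
	})
}

With a Leases lock the deprecation warnings above disappear, since the election traffic goes to coordination.k8s.io rather than v1 Endpoints.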
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-589411 -n no-preload-589411
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-589411 -n no-preload-589411: exit status 2 (369.840606ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-589411 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.31s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-696683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-696683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (260.990909ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:40:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-696683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
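Note: the MK_ADDON_ENABLE_PAUSED failure above is not the addon itself. Before enabling an addon, minikube checks whether the cluster is paused by listing the runtime's containers, and it is that check which exits 1: runc's default state directory /run/runc does not exist on this node. A rough sketch of such a check — illustrative only, not minikube's actual implementation, though the command is the one quoted in the error:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// runcContainer holds the two fields of `runc list -f json` output we care about.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	// Same command the minikube error message quotes; on this node it fails with
	// "open /run/runc: no such file or directory".
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		log.Fatalf("list paused: runc: %v", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		if c.Status == "paused" {
			fmt.Println("paused container:", c.ID)
		}
	}
}

The same /run/runc error explains the Pause, EnableAddonWhileActive, and JSONOutput pause/unpause failures grouped in the summary table: every path that asks runc for container state hits it.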
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-696683
helpers_test.go:243: (dbg) docker inspect newest-cni-696683:

-- stdout --
	[
	    {
	        "Id": "5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d",
	        "Created": "2025-11-21T14:40:09.858539205Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 270921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:40:09.901633112Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d/hostname",
	        "HostsPath": "/var/lib/docker/containers/5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d/hosts",
	        "LogPath": "/var/lib/docker/containers/5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d/5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d-json.log",
	        "Name": "/newest-cni-696683",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-696683:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-696683",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d",
	                "LowerDir": "/var/lib/docker/overlay2/655e2907b15a841ba8d7c09b0eecf0c4c7a490c173b62e8a174062781efe4d9f-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/655e2907b15a841ba8d7c09b0eecf0c4c7a490c173b62e8a174062781efe4d9f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/655e2907b15a841ba8d7c09b0eecf0c4c7a490c173b62e8a174062781efe4d9f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/655e2907b15a841ba8d7c09b0eecf0c4c7a490c173b62e8a174062781efe4d9f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-696683",
	                "Source": "/var/lib/docker/volumes/newest-cni-696683/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-696683",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-696683",
	                "name.minikube.sigs.k8s.io": "newest-cni-696683",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "271d11e96737a122fdcd238daefe7313c1df35de2473ecea9f785ddd4d7de8ff",
	            "SandboxKey": "/var/run/docker/netns/271d11e96737",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-696683": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3b7fce235b16a39fb4cd51190508048f90b9443938b78208046c510cbfbee936",
	                    "EndpointID": "449743b73df6f46189467183405367819765ad26ecb88471692aee61bfac688e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "5e:ea:86:ff:de:0b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-696683",
	                        "5aacf10261f2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
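Note: the NetworkSettings.Ports block in the inspect output above is how minikube reaches the node — each container port (22/tcp for SSH, 8443/tcp for the apiserver) is published on an ephemeral 127.0.0.1 port. The provisioning log further down extracts the SSH port with a Go template over exactly this structure; a small standalone sketch of that lookup (profile name taken from this report):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template minikube runs in the provisioning log below; it digs the
	// host port for 22/tcp out of the NetworkSettings.Ports structure shown above.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"newest-cni-696683").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // prints 33084 here
}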
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-696683 -n newest-cni-696683
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-696683 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-794941 image list --format=json                                                                                                                                                                                               │ old-k8s-version-794941       │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ pause   │ -p old-k8s-version-794941 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-794941       │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-589411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:38 UTC │
	│ start   │ -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:39 UTC │
	│ delete  │ -p old-k8s-version-794941                                                                                                                                                                                                                     │ old-k8s-version-794941       │ jenkins │ v1.37.0 │ 21 Nov 25 14:38 UTC │ 21 Nov 25 14:39 UTC │
	│ delete  │ -p old-k8s-version-794941                                                                                                                                                                                                                     │ old-k8s-version-794941       │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ start   │ -p embed-certs-441390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ start   │ -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-214044    │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ start   │ -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-214044    │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ addons  │ enable metrics-server -p embed-certs-441390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ image   │ no-preload-589411 image list --format=json                                                                                                                                                                                                    │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ pause   │ -p no-preload-589411 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ stop    │ -p embed-certs-441390 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p cert-expiration-046125 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-046125       │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p kubernetes-upgrade-214044                                                                                                                                                                                                                  │ kubernetes-upgrade-214044    │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p disable-driver-mounts-708207                                                                                                                                                                                                               │ disable-driver-mounts-708207 │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p default-k8s-diff-port-859276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-859276 │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ delete  │ -p no-preload-589411                                                                                                                                                                                                                          │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p cert-expiration-046125                                                                                                                                                                                                                     │ cert-expiration-046125       │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p no-preload-589411                                                                                                                                                                                                                          │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p newest-cni-696683 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p auto-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-441390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p embed-certs-441390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-696683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:40:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:40:12.049512  271969 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:40:12.049647  271969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:12.049664  271969 out.go:374] Setting ErrFile to fd 2...
	I1121 14:40:12.049671  271969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:12.049961  271969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:40:12.050587  271969 out.go:368] Setting JSON to false
	I1121 14:40:12.051999  271969 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4961,"bootTime":1763731051,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:40:12.052113  271969 start.go:143] virtualization: kvm guest
	I1121 14:40:12.055669  271969 out.go:179] * [embed-certs-441390] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:40:12.057013  271969 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:40:12.057018  271969 notify.go:221] Checking for updates...
	I1121 14:40:12.058797  271969 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:40:12.064030  271969 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:12.065586  271969 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:40:12.066734  271969 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:40:12.068030  271969 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:40:12.069744  271969 config.go:182] Loaded profile config "embed-certs-441390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:12.070383  271969 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:40:12.099907  271969 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:40:12.100014  271969 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:40:12.168613  271969 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-21 14:40:12.157850684 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:40:12.168766  271969 docker.go:319] overlay module found
	I1121 14:40:12.172956  271969 out.go:179] * Using the docker driver based on existing profile
	I1121 14:40:12.174092  271969 start.go:309] selected driver: docker
	I1121 14:40:12.174113  271969 start.go:930] validating driver "docker" against &{Name:embed-certs-441390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-441390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:12.174213  271969 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:40:12.174996  271969 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:40:12.243409  271969 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-21 14:40:12.233360149 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:40:12.243842  271969 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:40:12.243878  271969 cni.go:84] Creating CNI manager for ""
	I1121 14:40:12.243936  271969 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:40:12.243998  271969 start.go:353] cluster config:
	{Name:embed-certs-441390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-441390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:12.246140  271969 out.go:179] * Starting "embed-certs-441390" primary control-plane node in "embed-certs-441390" cluster
	I1121 14:40:12.247249  271969 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:40:12.248436  271969 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:40:12.250435  271969 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:40:12.250470  271969 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 14:40:12.250481  271969 cache.go:65] Caching tarball of preloaded images
	I1121 14:40:12.250533  271969 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:40:12.250592  271969 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 14:40:12.250618  271969 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:40:12.250745  271969 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/embed-certs-441390/config.json ...
	I1121 14:40:12.274529  271969 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:40:12.274549  271969 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:40:12.274588  271969 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:40:12.274630  271969 start.go:360] acquireMachinesLock for embed-certs-441390: {Name:mkcbbe6204b069f72e81cddad318e7cece002367 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:40:12.274708  271969 start.go:364] duration metric: took 54.853µs to acquireMachinesLock for "embed-certs-441390"
	I1121 14:40:12.274739  271969 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:40:12.274750  271969 fix.go:54] fixHost starting: 
	I1121 14:40:12.274982  271969 cli_runner.go:164] Run: docker container inspect embed-certs-441390 --format={{.State.Status}}
	I1121 14:40:12.294638  271969 fix.go:112] recreateIfNeeded on embed-certs-441390: state=Stopped err=<nil>
	W1121 14:40:12.294671  271969 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 14:40:10.560457  268814 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:40:10.976240  268814 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:10.996974  268814 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:40:10.997001  268814 kic_runner.go:114] Args: [docker exec --privileged newest-cni-696683 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:40:11.166210  268814 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:11.190326  268814 machine.go:94] provisionDockerMachine start ...
	I1121 14:40:11.190423  268814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:11.211032  268814 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:11.211353  268814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1121 14:40:11.211373  268814 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:40:11.355250  268814 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-696683
	
	I1121 14:40:11.355280  268814 ubuntu.go:182] provisioning hostname "newest-cni-696683"
	I1121 14:40:11.355339  268814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:11.374108  268814 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:11.374312  268814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1121 14:40:11.374325  268814 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-696683 && echo "newest-cni-696683" | sudo tee /etc/hostname
	I1121 14:40:11.629970  268814 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-696683
	
	I1121 14:40:11.630074  268814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:11.650661  268814 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:11.650944  268814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1121 14:40:11.650981  268814 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-696683' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-696683/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-696683' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:40:11.818212  268814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:40:11.818278  268814 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11045/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11045/.minikube}
	I1121 14:40:11.818309  268814 ubuntu.go:190] setting up certificates
	I1121 14:40:11.818321  268814 provision.go:84] configureAuth start
	I1121 14:40:11.818377  268814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-696683
	I1121 14:40:11.840801  268814 provision.go:143] copyHostCerts
	I1121 14:40:11.840874  268814 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem, removing ...
	I1121 14:40:11.840891  268814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem
	I1121 14:40:11.841597  268814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem (1078 bytes)
	I1121 14:40:11.841790  268814 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem, removing ...
	I1121 14:40:11.841806  268814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem
	I1121 14:40:11.841855  268814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem (1123 bytes)
	I1121 14:40:11.841960  268814 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem, removing ...
	I1121 14:40:11.841973  268814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem
	I1121 14:40:11.842018  268814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem (1679 bytes)
	I1121 14:40:11.842113  268814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem org=jenkins.newest-cni-696683 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-696683]
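
provision.go then issues a server certificate whose SANs cover every name the node may be reached by: loopback, the container IP, localhost, minikube, and the profile name. A minimal crypto/x509 sketch of generating such a cert; unlike minikube, which signs with its CA key, this one is self-signed to stay short:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// SANs mirror the list logged above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-696683"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-696683"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
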
	I1121 14:40:12.472043  268814 provision.go:177] copyRemoteCerts
	I1121 14:40:12.472122  268814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:40:12.472173  268814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:12.492713  268814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:12.591043  268814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:40:12.618282  268814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:40:12.644459  268814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:40:12.666193  268814 provision.go:87] duration metric: took 847.858217ms to configureAuth
	I1121 14:40:12.666225  268814 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:40:12.666433  268814 config.go:182] Loaded profile config "newest-cni-696683": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:12.666607  268814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:12.687231  268814 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:12.687430  268814 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I1121 14:40:12.687446  268814 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:40:13.121742  268814 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:40:13.121766  268814 machine.go:97] duration metric: took 1.931416959s to provisionDockerMachine
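
The --insecure-registry 10.96.0.0/12 option written to /etc/sysconfig/crio.minikube above covers the cluster's service CIDR, so a registry exposed as a ClusterIP Service can be pulled from over plain HTTP. A quick containment check over that range (sample addresses chosen for illustration):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// 10.96.0.0/12 spans 10.96.0.0 - 10.111.255.255, the service CIDR.
		_, svcNet, err := net.ParseCIDR("10.96.0.0/12")
		if err != nil {
			panic(err)
		}
		for _, ip := range []string{"10.96.0.10", "10.111.240.3", "192.168.85.2"} {
			fmt.Println(ip, "in service CIDR:", svcNet.Contains(net.ParseIP(ip)))
		}
	}
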
	I1121 14:40:13.121778  268814 client.go:176] duration metric: took 7.464758217s to LocalClient.Create
	I1121 14:40:13.121800  268814 start.go:167] duration metric: took 7.464830235s to libmachine.API.Create "newest-cni-696683"
	I1121 14:40:13.121817  268814 start.go:293] postStartSetup for "newest-cni-696683" (driver="docker")
	I1121 14:40:13.121832  268814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:40:13.121897  268814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:40:13.121933  268814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:13.138113  268814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:13.233820  268814 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:40:13.237198  268814 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:40:13.237223  268814 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:40:13.237232  268814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/addons for local assets ...
	I1121 14:40:13.237270  268814 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/files for local assets ...
	I1121 14:40:13.237375  268814 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem -> 145422.pem in /etc/ssl/certs
	I1121 14:40:13.237494  268814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:40:13.244683  268814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:40:13.403719  268814 start.go:296] duration metric: took 281.885631ms for postStartSetup
	I1121 14:40:13.404051  268814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-696683
	I1121 14:40:13.421157  268814 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/config.json ...
	I1121 14:40:13.464969  268814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:40:13.465017  268814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:13.482175  268814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:13.572273  268814 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:40:13.576460  268814 start.go:128] duration metric: took 7.926513752s to createHost
	I1121 14:40:13.576484  268814 start.go:83] releasing machines lock for "newest-cni-696683", held for 7.926661651s
	I1121 14:40:13.576551  268814 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-696683
	I1121 14:40:13.594382  268814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:40:13.594454  268814 ssh_runner.go:195] Run: cat /version.json
	I1121 14:40:13.594463  268814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:13.594548  268814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:13.612874  268814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:13.613835  268814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:13.702944  268814 ssh_runner.go:195] Run: systemctl --version
	I1121 14:40:13.756192  268814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:40:13.788493  268814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:40:13.792745  268814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:40:13.792806  268814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:40:13.936221  268814 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
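
The find/mv pass above sidelines any pre-installed bridge or podman CNI configs by appending .mk_disabled, since kindnet will supply the pod network instead. The same rename step sketched in Go (glob patterns assumed equivalent to the find predicates in the log):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pat)
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled on a previous run
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}
	}
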
	I1121 14:40:13.936242  268814 start.go:496] detecting cgroup driver to use...
	I1121 14:40:13.936274  268814 detect.go:190] detected "systemd" cgroup driver on host os
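
How the "systemd" driver gets detected is not shown in the log; a common heuristic (an assumption here, not necessarily detect.go's exact logic) is to probe for the cgroup v2 unified hierarchy and pair it with a systemd init:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// cgroup v2 hosts expose cgroup.controllers at the unified mount point.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 -> systemd cgroup driver")
		} else {
			fmt.Println("cgroup v1 -> cgroupfs cgroup driver")
		}
	}
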
	I1121 14:40:13.936320  268814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:40:13.960742  268814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:40:13.979744  268814 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:40:13.979801  268814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:40:14.002416  268814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:40:14.023807  268814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:40:14.123620  268814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:40:14.223430  268814 docker.go:234] disabling docker service ...
	I1121 14:40:14.223506  268814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:40:14.243281  268814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:40:14.255809  268814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:40:14.340920  268814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:40:14.441759  268814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:40:14.455189  268814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:40:14.473946  268814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:40:14.474009  268814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:14.487140  268814 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 14:40:14.487198  268814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:14.495800  268814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:14.503592  268814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:14.511433  268814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:40:14.518952  268814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:14.526961  268814 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:14.541653  268814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
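
The sed one-liners above patch /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. An in-memory Go equivalent of the first two rewrites (the sample input config is invented for illustration):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	`
		// (?m) makes ^/$ match per line, like sed's default addressing.
		rePause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		reCgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		out := rePause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		out = reCgroup.ReplaceAllString(out, `cgroup_manager = "systemd"`)
		fmt.Print(out)
	}
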
	I1121 14:40:14.552875  268814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:40:14.562788  268814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:40:14.570066  268814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:14.699965  268814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 14:40:14.839787  268814 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:40:14.840013  268814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:40:14.844948  268814 start.go:564] Will wait 60s for crictl version
	I1121 14:40:14.845011  268814 ssh_runner.go:195] Run: which crictl
	I1121 14:40:14.848793  268814 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:40:14.876852  268814 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:40:14.876927  268814 ssh_runner.go:195] Run: crio --version
	I1121 14:40:14.910226  268814 ssh_runner.go:195] Run: crio --version
	I1121 14:40:14.940269  268814 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:40:14.941791  268814 cli_runner.go:164] Run: docker network inspect newest-cni-696683 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:40:14.960321  268814 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:40:14.964411  268814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:40:14.975467  268814 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1121 14:40:10.344784  266798 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:10.375331  266798 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276 for IP: 192.168.76.2
	I1121 14:40:10.375354  266798 certs.go:195] generating shared ca certs ...
	I1121 14:40:10.375374  266798 certs.go:227] acquiring lock for ca certs: {Name:mkde3a7d6f17b238f06eab3a140993599f1b4367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:10.375541  266798 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key
	I1121 14:40:10.375618  266798 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key
	I1121 14:40:10.375634  266798 certs.go:257] generating profile certs ...
	I1121 14:40:10.375703  266798 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/client.key
	I1121 14:40:10.375729  266798 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/client.crt with IP's: []
	I1121 14:40:11.013049  266798 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/client.crt ...
	I1121 14:40:11.013085  266798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/client.crt: {Name:mk472b52ffb519b30f8bd8bc8f303671eb94b763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:11.013282  266798 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/client.key ...
	I1121 14:40:11.013301  266798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/client.key: {Name:mk100db600b2ce4b399249533b30d6795f2823ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:11.013437  266798 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/apiserver.key.25426533
	I1121 14:40:11.013463  266798 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/apiserver.crt.25426533 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
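
The apiserver certificate's SAN list always includes 10.96.0.1: the first address of the 10.96.0.0/12 service CIDR, which the in-cluster "kubernetes" Service occupies. Deriving it from the CIDR (naive +1 on the network address, which is fine for this range):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		_, cidr, err := net.ParseCIDR("10.96.0.0/12")
		if err != nil {
			panic(err)
		}
		ip := cidr.IP.To4()
		first := net.IPv4(ip[0], ip[1], ip[2], ip[3]+1) // naive +1 on the last octet
		fmt.Println(first)                              // 10.96.0.1
	}
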
	I1121 14:40:11.190993  266798 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/apiserver.crt.25426533 ...
	I1121 14:40:11.191026  266798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/apiserver.crt.25426533: {Name:mkf7cea5d4a77273f120d555c9833af330b81f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:11.191211  266798 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/apiserver.key.25426533 ...
	I1121 14:40:11.191233  266798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/apiserver.key.25426533: {Name:mk83f5293fac6239f6177854a8626b5af950be4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:11.191347  266798 certs.go:382] copying /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/apiserver.crt.25426533 -> /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/apiserver.crt
	I1121 14:40:11.191448  266798 certs.go:386] copying /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/apiserver.key.25426533 -> /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/apiserver.key
	I1121 14:40:11.191535  266798 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/proxy-client.key
	I1121 14:40:11.191585  266798 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/proxy-client.crt with IP's: []
	I1121 14:40:11.428549  266798 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/proxy-client.crt ...
	I1121 14:40:11.428594  266798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/proxy-client.crt: {Name:mkc1a1dc74183ca6cb6d453738dd82d8c31d7c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:11.504230  266798 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/proxy-client.key ...
	I1121 14:40:11.504264  266798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/proxy-client.key: {Name:mke1c7013634fd34467027b60bcd7b7a2a114f0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:11.504544  266798 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem (1338 bytes)
	W1121 14:40:11.504619  266798 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542_empty.pem, impossibly tiny 0 bytes
	I1121 14:40:11.504632  266798 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:40:11.504668  266798 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:40:11.504712  266798 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:40:11.504741  266798 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem (1679 bytes)
	I1121 14:40:11.504795  266798 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:40:11.505627  266798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:40:11.639606  266798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:40:11.670911  266798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:40:11.706986  266798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 14:40:11.744273  266798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1121 14:40:11.765615  266798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:40:11.785048  266798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:40:11.804627  266798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/default-k8s-diff-port-859276/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:40:11.824498  266798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:40:11.850555  266798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem --> /usr/share/ca-certificates/14542.pem (1338 bytes)
	I1121 14:40:11.871245  266798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /usr/share/ca-certificates/145422.pem (1708 bytes)
	I1121 14:40:11.893761  266798 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:40:11.908115  266798 ssh_runner.go:195] Run: openssl version
	I1121 14:40:11.914936  266798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:40:11.925471  266798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:11.929777  266798 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:11.929864  266798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:11.977831  266798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:40:11.987989  266798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14542.pem && ln -fs /usr/share/ca-certificates/14542.pem /etc/ssl/certs/14542.pem"
	I1121 14:40:11.996273  266798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14542.pem
	I1121 14:40:11.999884  266798 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14542.pem
	I1121 14:40:11.999939  266798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	I1121 14:40:12.048753  266798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14542.pem /etc/ssl/certs/51391683.0"
	I1121 14:40:12.059011  266798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145422.pem && ln -fs /usr/share/ca-certificates/145422.pem /etc/ssl/certs/145422.pem"
	I1121 14:40:12.067851  266798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145422.pem
	I1121 14:40:12.071999  266798 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145422.pem
	I1121 14:40:12.072047  266798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145422.pem
	I1121 14:40:12.112723  266798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145422.pem /etc/ssl/certs/3ec20f2e.0"
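
The ln -fs steps build an OpenSSL-style hashed trust directory: verifiers look CA certs up under /etc/ssl/certs by subject-hash filenames such as b5213941.0 and 3ec20f2e.0. A sketch that produces such a link by shelling out to openssl for the hash (paths taken from the log; needs root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		// openssl prints the subject hash used as the "<hash>.0" lookup name.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
			panic(err)
		}
		fmt.Println("linked", link)
	}
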
	I1121 14:40:12.124448  266798 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:40:12.129139  266798 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:40:12.129222  266798 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-859276 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-859276 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:12.129306  266798 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:40:12.129350  266798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:40:12.164174  266798 cri.go:89] found id: ""
	I1121 14:40:12.164239  266798 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:40:12.172730  266798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:40:12.181117  266798 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:40:12.181167  266798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:40:12.189955  266798 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:40:12.189973  266798 kubeadm.go:158] found existing configuration files:
	
	I1121 14:40:12.190017  266798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1121 14:40:12.201456  266798 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:40:12.201529  266798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:40:12.212181  266798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1121 14:40:12.222796  266798 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:40:12.222867  266798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:40:12.231925  266798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1121 14:40:12.241227  266798 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:40:12.241285  266798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:40:12.249186  266798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1121 14:40:12.257657  266798 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:40:12.257711  266798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 14:40:12.266543  266798 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:40:12.346608  266798 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:40:12.416484  266798 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
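
Both [WARNING ...] lines are informational: kubeadm preflight only aborts on [ERROR] results, and the checks most likely to error inside a container (Swap, SystemVerification, and the rest) are explicitly listed in --ignore-preflight-errors above. A hypothetical helper, not minikube's code, that runs kubeadm init and surfaces just the preflight warnings (flag list abbreviated):

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
		"regexp"
	)

	func main() {
		cmd := exec.Command("kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml",
			"--ignore-preflight-errors=Swap,SystemVerification") // abbreviated
		out, runErr := cmd.CombinedOutput()
		warn := regexp.MustCompile(`\[WARNING ([^\]]+)\]: (.*)`)
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			if m := warn.FindStringSubmatch(sc.Text()); m != nil {
				fmt.Printf("preflight %s: %s\n", m[1], m[2])
			}
		}
		if runErr != nil {
			fmt.Println("kubeadm failed:", runErr)
		}
	}
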
	I1121 14:40:14.976375  268814 kubeadm.go:884] updating cluster {Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:40:14.976495  268814 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:40:14.976545  268814 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:40:15.014023  268814 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:40:15.014044  268814 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:40:15.014089  268814 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:40:15.040032  268814 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:40:15.040050  268814 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:40:15.040057  268814 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1121 14:40:15.040128  268814 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-696683 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
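
The duplicated ExecStart= in the drop-in above is intentional: a systemd override must first clear the command inherited from the base kubelet.service with an empty ExecStart= before a non-oneshot unit may set a new one. A sketch that writes such a drop-in (kubelet flags abbreviated from the log):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// The empty ExecStart= resets the base unit's command list.
		unit := `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

	[Install]
	`
		path := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
		if err := os.WriteFile(path, []byte(unit), 0o644); err != nil {
			panic(err)
		}
		fmt.Println("wrote", path, "- now: systemctl daemon-reload && systemctl restart kubelet")
	}
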
	I1121 14:40:15.040184  268814 ssh_runner.go:195] Run: crio config
	I1121 14:40:15.087431  268814 cni.go:84] Creating CNI manager for ""
	I1121 14:40:15.087462  268814 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:40:15.087488  268814 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1121 14:40:15.087512  268814 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-696683 NodeName:newest-cni-696683 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:40:15.087678  268814 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-696683"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
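
Note that the KubeletConfiguration stanza above deliberately disables disk-pressure eviction for CI: every evictionHard threshold is zeroed and imageGCHighThresholdPercent is set to 100. A small sketch that parses that stanza (uses gopkg.in/yaml.v3; the struct shape is an assumption covering only these fields):

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	func main() {
		var cfg struct {
			ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
			EvictionHard                map[string]string `yaml:"evictionHard"`
		}
		doc := `
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	`
		if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
			panic(err)
		}
		fmt.Println(cfg.ImageGCHighThresholdPercent, cfg.EvictionHard)
	}
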
	
	I1121 14:40:15.087731  268814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:40:15.095275  268814 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:40:15.095331  268814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:40:15.102704  268814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1121 14:40:15.114284  268814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:40:15.128447  268814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1121 14:40:15.139983  268814 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:40:15.143618  268814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:40:15.153344  268814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:15.250057  268814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:15.286805  268814 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683 for IP: 192.168.85.2
	I1121 14:40:15.286828  268814 certs.go:195] generating shared ca certs ...
	I1121 14:40:15.286850  268814 certs.go:227] acquiring lock for ca certs: {Name:mkde3a7d6f17b238f06eab3a140993599f1b4367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:15.287006  268814 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key
	I1121 14:40:15.287056  268814 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key
	I1121 14:40:15.287070  268814 certs.go:257] generating profile certs ...
	I1121 14:40:15.287167  268814 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/client.key
	I1121 14:40:15.287194  268814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/client.crt with IP's: []
	I1121 14:40:15.436419  268814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/client.crt ...
	I1121 14:40:15.436447  268814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/client.crt: {Name:mkbf5f873c7dbbf57f063b1f07e104af1c425b4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:15.436607  268814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/client.key ...
	I1121 14:40:15.436618  268814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/client.key: {Name:mkc303cbf996f6cd9b6e54ec33223994572105f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:15.436691  268814 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.key.78303e51
	I1121 14:40:15.436712  268814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.crt.78303e51 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1121 14:40:13.944276  269911 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-989875:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (4.861961404s)
	I1121 14:40:13.944314  269911 kic.go:203] duration metric: took 4.862115579s to extract preloaded images to volume ...
	W1121 14:40:13.944405  269911 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1121 14:40:13.944446  269911 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1121 14:40:13.944489  269911 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 14:40:14.017355  269911 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-989875 --name auto-989875 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-989875 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-989875 --network auto-989875 --ip 192.168.103.2 --volume auto-989875:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
	I1121 14:40:14.329582  269911 cli_runner.go:164] Run: docker container inspect auto-989875 --format={{.State.Running}}
	I1121 14:40:14.349859  269911 cli_runner.go:164] Run: docker container inspect auto-989875 --format={{.State.Status}}
	I1121 14:40:14.369655  269911 cli_runner.go:164] Run: docker exec auto-989875 stat /var/lib/dpkg/alternatives/iptables
	I1121 14:40:14.421105  269911 oci.go:144] the created container "auto-989875" has a running status.
	I1121 14:40:14.421137  269911 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/auto-989875/id_rsa...
	I1121 14:40:14.721468  269911 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21847-11045/.minikube/machines/auto-989875/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 14:40:14.749050  269911 cli_runner.go:164] Run: docker container inspect auto-989875 --format={{.State.Status}}
	I1121 14:40:14.768025  269911 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 14:40:14.768041  269911 kic_runner.go:114] Args: [docker exec --privileged auto-989875 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 14:40:14.809551  269911 cli_runner.go:164] Run: docker container inspect auto-989875 --format={{.State.Status}}
	I1121 14:40:14.828590  269911 machine.go:94] provisionDockerMachine start ...
	I1121 14:40:14.828678  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:14.848598  269911 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:14.848976  269911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1121 14:40:14.848996  269911 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:40:14.989702  269911 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-989875
	
	I1121 14:40:14.989732  269911 ubuntu.go:182] provisioning hostname "auto-989875"
	I1121 14:40:14.989786  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:15.011485  269911 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:15.011772  269911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1121 14:40:15.011791  269911 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-989875 && echo "auto-989875" | sudo tee /etc/hostname
	I1121 14:40:15.153094  269911 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-989875
	
	I1121 14:40:15.153173  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:15.171932  269911 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:15.172201  269911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1121 14:40:15.172223  269911 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-989875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-989875/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-989875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:40:15.305323  269911 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:40:15.305350  269911 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11045/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11045/.minikube}
	I1121 14:40:15.305391  269911 ubuntu.go:190] setting up certificates
	I1121 14:40:15.305407  269911 provision.go:84] configureAuth start
	I1121 14:40:15.305460  269911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-989875
	I1121 14:40:15.326209  269911 provision.go:143] copyHostCerts
	I1121 14:40:15.326273  269911 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem, removing ...
	I1121 14:40:15.326287  269911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem
	I1121 14:40:15.326373  269911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem (1679 bytes)
	I1121 14:40:15.326483  269911 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem, removing ...
	I1121 14:40:15.326494  269911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem
	I1121 14:40:15.326537  269911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem (1078 bytes)
	I1121 14:40:15.326652  269911 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem, removing ...
	I1121 14:40:15.326665  269911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem
	I1121 14:40:15.326705  269911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem (1123 bytes)
	I1121 14:40:15.326786  269911 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem org=jenkins.auto-989875 san=[127.0.0.1 192.168.103.2 auto-989875 localhost minikube]
	I1121 14:40:15.525232  269911 provision.go:177] copyRemoteCerts
	I1121 14:40:15.525284  269911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:40:15.525316  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:15.543000  269911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/auto-989875/id_rsa Username:docker}
	I1121 14:40:15.636908  269911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:40:15.655055  269911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1121 14:40:15.671490  269911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:40:15.688549  269911 provision.go:87] duration metric: took 383.130888ms to configureAuth
	I1121 14:40:15.688618  269911 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:40:15.688789  269911 config.go:182] Loaded profile config "auto-989875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:15.688895  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:15.707262  269911 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:15.707485  269911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33094 <nil> <nil>}
	I1121 14:40:15.707503  269911 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:40:15.975809  269911 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:40:15.975833  269911 machine.go:97] duration metric: took 1.147220179s to provisionDockerMachine
	I1121 14:40:15.975844  269911 client.go:176] duration metric: took 8.924626816s to LocalClient.Create
	I1121 14:40:15.975860  269911 start.go:167] duration metric: took 8.92468678s to libmachine.API.Create "auto-989875"
	I1121 14:40:15.975868  269911 start.go:293] postStartSetup for "auto-989875" (driver="docker")
	I1121 14:40:15.975880  269911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:40:15.975941  269911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:40:15.975984  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:15.997061  269911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/auto-989875/id_rsa Username:docker}
	I1121 14:40:16.095791  269911 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:40:16.099212  269911 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:40:16.099246  269911 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:40:16.099257  269911 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/addons for local assets ...
	I1121 14:40:16.099313  269911 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/files for local assets ...
	I1121 14:40:16.099424  269911 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem -> 145422.pem in /etc/ssl/certs
	I1121 14:40:16.099554  269911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:40:16.108889  269911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:40:16.135859  269911 start.go:296] duration metric: took 159.978026ms for postStartSetup
	I1121 14:40:16.136242  269911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-989875
	I1121 14:40:16.157250  269911 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/config.json ...
	I1121 14:40:16.157519  269911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:40:16.157594  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:16.176443  269911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/auto-989875/id_rsa Username:docker}
	I1121 14:40:16.267438  269911 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:40:16.272122  269911 start.go:128] duration metric: took 9.222966753s to createHost
	I1121 14:40:16.272143  269911 start.go:83] releasing machines lock for "auto-989875", held for 9.223106494s
	I1121 14:40:16.272198  269911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-989875
	I1121 14:40:16.290408  269911 ssh_runner.go:195] Run: cat /version.json
	I1121 14:40:16.290457  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:16.290489  269911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:40:16.290574  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:16.310610  269911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/auto-989875/id_rsa Username:docker}
	I1121 14:40:16.311970  269911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/auto-989875/id_rsa Username:docker}
	I1121 14:40:16.464034  269911 ssh_runner.go:195] Run: systemctl --version
	I1121 14:40:16.470024  269911 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:40:16.508642  269911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:40:16.512959  269911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:40:16.513015  269911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:40:16.538591  269911 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
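	(The find invocation above sidelines any bridge/podman CNI configs by appending a .mk_disabled suffix rather than deleting them. A plainer-shell sketch of the same idea, assuming the standard /etc/cni/net.d layout:
	    # Disable bridge/podman CNI configs the way the log does: rename, don't delete.
	    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	      [ -f "$f" ] || continue                      # skip unmatched globs
	      case "$f" in *.mk_disabled) continue ;; esac # already disabled
	      sudo mv "$f" "$f.mk_disabled"
	    done
	    # To undo: for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done
	)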
	I1121 14:40:16.538611  269911 start.go:496] detecting cgroup driver to use...
	I1121 14:40:16.538641  269911 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:40:16.538722  269911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:40:16.556040  269911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:40:16.567711  269911 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:40:16.567754  269911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:40:16.584200  269911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:40:16.604482  269911 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:40:16.691003  269911 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:40:16.789449  269911 docker.go:234] disabling docker service ...
	I1121 14:40:16.789505  269911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:40:16.810453  269911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:40:16.823141  269911 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:40:15.862584  268814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.crt.78303e51 ...
	I1121 14:40:15.862611  268814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.crt.78303e51: {Name:mk4222f687195ded49a6b2f0974a124e53cc2a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:15.862764  268814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.key.78303e51 ...
	I1121 14:40:15.862786  268814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.key.78303e51: {Name:mk660100da518ca2582c8e3efdcad8ebbcd34477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:15.862901  268814 certs.go:382] copying /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.crt.78303e51 -> /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.crt
	I1121 14:40:15.863009  268814 certs.go:386] copying /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.key.78303e51 -> /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.key
	I1121 14:40:15.863101  268814 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.key
	I1121 14:40:15.863126  268814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.crt with IP's: []
	I1121 14:40:16.252619  268814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.crt ...
	I1121 14:40:16.252641  268814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.crt: {Name:mk5a94c13476af6bef5e4bddc8a6d379c3723997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:16.252785  268814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.key ...
	I1121 14:40:16.252797  268814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.key: {Name:mk225011872153a923f6cb486d208742a95db4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:16.252966  268814 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem (1338 bytes)
	W1121 14:40:16.253000  268814 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542_empty.pem, impossibly tiny 0 bytes
	I1121 14:40:16.253009  268814 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:40:16.253030  268814 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:40:16.253050  268814 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:40:16.253072  268814 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem (1679 bytes)
	I1121 14:40:16.253108  268814 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:40:16.253653  268814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:40:16.271839  268814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:40:16.289822  268814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:40:16.311998  268814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 14:40:16.328732  268814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:40:16.348038  268814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:40:16.368489  268814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:40:16.385779  268814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:40:16.402088  268814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:40:16.421829  268814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem --> /usr/share/ca-certificates/14542.pem (1338 bytes)
	I1121 14:40:16.437594  268814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /usr/share/ca-certificates/145422.pem (1708 bytes)
	I1121 14:40:16.453000  268814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:40:16.464291  268814 ssh_runner.go:195] Run: openssl version
	I1121 14:40:16.470146  268814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14542.pem && ln -fs /usr/share/ca-certificates/14542.pem /etc/ssl/certs/14542.pem"
	I1121 14:40:16.478414  268814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14542.pem
	I1121 14:40:16.482429  268814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14542.pem
	I1121 14:40:16.482471  268814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	I1121 14:40:16.522903  268814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14542.pem /etc/ssl/certs/51391683.0"
	I1121 14:40:16.533018  268814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145422.pem && ln -fs /usr/share/ca-certificates/145422.pem /etc/ssl/certs/145422.pem"
	I1121 14:40:16.542836  268814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145422.pem
	I1121 14:40:16.546947  268814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145422.pem
	I1121 14:40:16.546994  268814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145422.pem
	I1121 14:40:16.582694  268814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145422.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:40:16.592088  268814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:40:16.602292  268814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:16.606094  268814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:16.606141  268814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:16.651109  268814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
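	(The hash/symlink pattern above is OpenSSL's hashed certificate directory at work: openssl x509 -hash -noout prints the subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs is what makes the cert discoverable to verifiers. A self-contained sketch using the minikubeCA cert named in the log:
	    # Compute the subject hash and install the <hash>.0 symlink OpenSSL looks up.
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, as in the log
	    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"
	    openssl verify -CApath /etc/ssl/certs "$CERT"   # a self-signed CA should now verify: OK
	)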
	I1121 14:40:16.659145  268814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:40:16.662894  268814 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:40:16.662953  268814 kubeadm.go:401] StartCluster: {Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:16.663044  268814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:40:16.663093  268814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:40:16.693707  268814 cri.go:89] found id: ""
	I1121 14:40:16.693775  268814 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:40:16.701399  268814 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:40:16.708772  268814 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:40:16.708822  268814 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:40:16.716958  268814 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:40:16.716975  268814 kubeadm.go:158] found existing configuration files:
	
	I1121 14:40:16.717023  268814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:40:16.730853  268814 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:40:16.730903  268814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:40:16.741095  268814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:40:16.748277  268814 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:40:16.748324  268814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:40:16.755489  268814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:40:16.762626  268814 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:40:16.762678  268814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:40:16.769655  268814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:40:16.777876  268814 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:40:16.777920  268814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
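	(The grep/rm pairs above are a stale-config sweep: any kubeconfig that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm init can regenerate it. The same sequence, folded into one loop:
	    # Remove kubeconfigs that don't point at the expected control-plane endpoint.
	    ENDPOINT=https://control-plane.minikube.internal:8443
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -qs "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done
	)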
	I1121 14:40:16.785419  268814 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:40:16.830106  268814 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:40:16.830201  268814 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:40:16.856194  268814 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:40:16.856286  268814 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:40:16.856330  268814 kubeadm.go:319] OS: Linux
	I1121 14:40:16.856395  268814 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:40:16.856456  268814 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:40:16.856522  268814 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:40:16.856614  268814 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:40:16.856683  268814 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:40:16.856756  268814 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:40:16.856825  268814 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:40:16.856887  268814 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:40:16.937463  268814 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:40:16.937651  268814 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:40:16.937793  268814 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:40:16.947317  268814 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:40:12.296535  271969 out.go:252] * Restarting existing docker container for "embed-certs-441390" ...
	I1121 14:40:12.296608  271969 cli_runner.go:164] Run: docker start embed-certs-441390
	I1121 14:40:12.600533  271969 cli_runner.go:164] Run: docker container inspect embed-certs-441390 --format={{.State.Status}}
	I1121 14:40:12.622552  271969 kic.go:430] container "embed-certs-441390" state is running.
	I1121 14:40:12.622964  271969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-441390
	I1121 14:40:12.646781  271969 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/embed-certs-441390/config.json ...
	I1121 14:40:12.647028  271969 machine.go:94] provisionDockerMachine start ...
	I1121 14:40:12.647102  271969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-441390
	I1121 14:40:12.669182  271969 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:12.669489  271969 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I1121 14:40:12.669507  271969 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:40:12.670207  271969 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42814->127.0.0.1:33089: read: connection reset by peer
	I1121 14:40:15.804700  271969 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-441390
	
	I1121 14:40:15.804729  271969 ubuntu.go:182] provisioning hostname "embed-certs-441390"
	I1121 14:40:15.804785  271969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-441390
	I1121 14:40:15.822433  271969 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:15.822774  271969 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I1121 14:40:15.822792  271969 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-441390 && echo "embed-certs-441390" | sudo tee /etc/hostname
	I1121 14:40:15.964946  271969 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-441390
	
	I1121 14:40:15.965024  271969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-441390
	I1121 14:40:15.984417  271969 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:15.984687  271969 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I1121 14:40:15.984710  271969 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-441390' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-441390/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-441390' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:40:16.116675  271969 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:40:16.116726  271969 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11045/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11045/.minikube}
	I1121 14:40:16.116754  271969 ubuntu.go:190] setting up certificates
	I1121 14:40:16.116764  271969 provision.go:84] configureAuth start
	I1121 14:40:16.116831  271969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-441390
	I1121 14:40:16.137578  271969 provision.go:143] copyHostCerts
	I1121 14:40:16.137638  271969 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem, removing ...
	I1121 14:40:16.137663  271969 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem
	I1121 14:40:16.137724  271969 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem (1078 bytes)
	I1121 14:40:16.137839  271969 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem, removing ...
	I1121 14:40:16.137855  271969 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem
	I1121 14:40:16.137889  271969 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem (1123 bytes)
	I1121 14:40:16.137977  271969 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem, removing ...
	I1121 14:40:16.137987  271969 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem
	I1121 14:40:16.138011  271969 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem (1679 bytes)
	I1121 14:40:16.138078  271969 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem org=jenkins.embed-certs-441390 san=[127.0.0.1 192.168.94.2 embed-certs-441390 localhost minikube]
	I1121 14:40:16.350174  271969 provision.go:177] copyRemoteCerts
	I1121 14:40:16.350233  271969 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:40:16.350280  271969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-441390
	I1121 14:40:16.373425  271969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/embed-certs-441390/id_rsa Username:docker}
	I1121 14:40:16.472169  271969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:40:16.488823  271969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1121 14:40:16.506454  271969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 14:40:16.523641  271969 provision.go:87] duration metric: took 406.860828ms to configureAuth
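	(configureAuth above generates server.pem with the SAN list printed by provision.go (127.0.0.1, 192.168.94.2, embed-certs-441390, localhost, minikube). One way to confirm the SANs actually landed in the cert, assuming the default ~/.minikube layout rather than this run's jenkins path:
	    # Inspect the Subject Alternative Names of the generated server certificate.
	    openssl x509 -in "$HOME/.minikube/machines/server.pem" -noout -text \
	      | grep -A1 'Subject Alternative Name'
	)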
	I1121 14:40:16.523672  271969 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:40:16.523845  271969 config.go:182] Loaded profile config "embed-certs-441390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:16.523962  271969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-441390
	I1121 14:40:16.544997  271969 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:16.545284  271969 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33089 <nil> <nil>}
	I1121 14:40:16.545315  271969 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:40:16.910697  271969 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:40:16.910724  271969 machine.go:97] duration metric: took 4.263678022s to provisionDockerMachine
	I1121 14:40:16.910738  271969 start.go:293] postStartSetup for "embed-certs-441390" (driver="docker")
	I1121 14:40:16.910752  271969 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:40:16.910814  271969 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:40:16.910870  271969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-441390
	I1121 14:40:16.932354  271969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/embed-certs-441390/id_rsa Username:docker}
	I1121 14:40:17.035740  271969 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:40:17.039067  271969 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:40:17.039088  271969 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:40:17.039098  271969 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/addons for local assets ...
	I1121 14:40:17.039156  271969 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/files for local assets ...
	I1121 14:40:17.039267  271969 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem -> 145422.pem in /etc/ssl/certs
	I1121 14:40:17.039390  271969 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:40:17.046781  271969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:40:16.944305  269911 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:40:17.028896  269911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:40:17.041751  269911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:40:17.055842  269911 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:40:17.055889  269911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:17.065698  269911 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 14:40:17.065743  269911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:17.075595  269911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:17.084695  269911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:17.095254  269911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:40:17.104291  269911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:17.113054  269911 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:17.126693  269911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
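	(After the sed run above, the 02-crio.conf drop-in should carry the pause image, the systemd cgroup manager, the pod conmon cgroup, and the unprivileged-port sysctl. A sketch of the expected fragment only; the real file keeps these keys under their usual [crio.*] section headers, omitted here:
	    # Expected shape of /etc/crio/crio.conf.d/02-crio.conf after the edits above:
	    cat <<'EOF'
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	    EOF
	)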
	I1121 14:40:17.136240  269911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:40:17.143278  269911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:40:17.150161  269911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:17.257737  269911 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 14:40:17.388755  269911 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:40:17.388824  269911 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:40:17.392512  269911 start.go:564] Will wait 60s for crictl version
	I1121 14:40:17.392580  269911 ssh_runner.go:195] Run: which crictl
	I1121 14:40:17.395910  269911 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:40:17.420467  269911 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:40:17.420542  269911 ssh_runner.go:195] Run: crio --version
	I1121 14:40:17.448236  269911 ssh_runner.go:195] Run: crio --version
	I1121 14:40:17.478230  269911 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:40:17.063690  271969 start.go:296] duration metric: took 152.940047ms for postStartSetup
	I1121 14:40:17.063771  271969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:40:17.063811  271969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-441390
	I1121 14:40:17.083574  271969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/embed-certs-441390/id_rsa Username:docker}
	I1121 14:40:17.187814  271969 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:40:17.192957  271969 fix.go:56] duration metric: took 4.918202625s for fixHost
	I1121 14:40:17.192981  271969 start.go:83] releasing machines lock for "embed-certs-441390", held for 4.918259397s
	I1121 14:40:17.193045  271969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-441390
	I1121 14:40:17.212011  271969 ssh_runner.go:195] Run: cat /version.json
	I1121 14:40:17.212049  271969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-441390
	I1121 14:40:17.212111  271969 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:40:17.212169  271969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-441390
	I1121 14:40:17.230817  271969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/embed-certs-441390/id_rsa Username:docker}
	I1121 14:40:17.231228  271969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/embed-certs-441390/id_rsa Username:docker}
	I1121 14:40:17.408347  271969 ssh_runner.go:195] Run: systemctl --version
	I1121 14:40:17.414941  271969 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:40:17.452713  271969 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:40:17.457195  271969 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:40:17.457252  271969 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:40:17.465190  271969 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 14:40:17.465209  271969 start.go:496] detecting cgroup driver to use...
	I1121 14:40:17.465247  271969 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:40:17.465293  271969 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:40:17.480019  271969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:40:17.491534  271969 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:40:17.491589  271969 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:40:17.507797  271969 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:40:17.520747  271969 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:40:17.615415  271969 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:40:17.700219  271969 docker.go:234] disabling docker service ...
	I1121 14:40:17.700270  271969 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:40:17.715362  271969 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:40:17.728072  271969 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:40:17.830998  271969 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:40:17.966753  271969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:40:17.979051  271969 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:40:17.993094  271969 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:40:17.993150  271969 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:18.002403  271969 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 14:40:18.002460  271969 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:18.011254  271969 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:18.019357  271969 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:18.027346  271969 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:40:18.034882  271969 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:18.043248  271969 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:18.050844  271969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:18.058909  271969 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:40:18.065601  271969 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:40:18.072180  271969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:18.164652  271969 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 14:40:18.311176  271969 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:40:18.311238  271969 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:40:18.315773  271969 start.go:564] Will wait 60s for crictl version
	I1121 14:40:18.315824  271969 ssh_runner.go:195] Run: which crictl
	I1121 14:40:18.320079  271969 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:40:18.345467  271969 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:40:18.345545  271969 ssh_runner.go:195] Run: crio --version
	I1121 14:40:18.374280  271969 ssh_runner.go:195] Run: crio --version
	I1121 14:40:18.403430  271969 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:40:18.404506  271969 cli_runner.go:164] Run: docker network inspect embed-certs-441390 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:40:18.422257  271969 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1121 14:40:18.426679  271969 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
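	(The one-liner above is dense; unpacked, it drops any existing host.minikube.internal line from /etc/hosts and appends the fresh gateway mapping via a temp file:
	    # Same effect as the logged one-liner: refresh the host.minikube.internal entry.
	    GATEWAY=192.168.94.1
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      printf '%s\thost.minikube.internal\n' "$GATEWAY"
	    } > "/tmp/hosts.$$"
	    sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
	)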
	I1121 14:40:18.436494  271969 kubeadm.go:884] updating cluster {Name:embed-certs-441390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-441390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:40:18.436613  271969 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:40:18.436660  271969 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:40:18.467997  271969 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:40:18.468018  271969 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:40:18.468066  271969 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:40:18.492769  271969 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:40:18.492793  271969 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:40:18.492801  271969 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1121 14:40:18.492914  271969 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-441390 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-441390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:40:18.492987  271969 ssh_runner.go:195] Run: crio config
	I1121 14:40:18.538615  271969 cni.go:84] Creating CNI manager for ""
	I1121 14:40:18.538640  271969 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:40:18.538661  271969 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:40:18.538707  271969 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-441390 NodeName:embed-certs-441390 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:40:18.538912  271969 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-441390"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
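	(Before a document like the one above is handed to kubeadm init, it can be sanity-checked offline; recent kubeadm releases, including the v1.34 binary used in this run, ship a validator subcommand. A sketch using the paths from the log:
	    # Validate the generated kubeadm config without starting anything.
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml
	)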
	
	I1121 14:40:18.538978  271969 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:40:18.547464  271969 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:40:18.547524  271969 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:40:18.554794  271969 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1121 14:40:18.566577  271969 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:40:18.578183  271969 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1121 14:40:18.589872  271969 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:40:18.593448  271969 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:40:18.603287  271969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:18.686670  271969 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:18.711683  271969 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/embed-certs-441390 for IP: 192.168.94.2
	I1121 14:40:18.711700  271969 certs.go:195] generating shared ca certs ...
	I1121 14:40:18.711718  271969 certs.go:227] acquiring lock for ca certs: {Name:mkde3a7d6f17b238f06eab3a140993599f1b4367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:18.711878  271969 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key
	I1121 14:40:18.711943  271969 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key
	I1121 14:40:18.711959  271969 certs.go:257] generating profile certs ...
	I1121 14:40:18.712059  271969 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/embed-certs-441390/client.key
	I1121 14:40:18.712167  271969 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/embed-certs-441390/apiserver.key.37f14787
	I1121 14:40:18.712226  271969 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/embed-certs-441390/proxy-client.key
	I1121 14:40:18.712358  271969 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem (1338 bytes)
	W1121 14:40:18.712395  271969 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542_empty.pem, impossibly tiny 0 bytes
	I1121 14:40:18.712407  271969 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:40:18.712444  271969 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:40:18.712480  271969 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:40:18.712508  271969 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem (1679 bytes)
	I1121 14:40:18.712577  271969 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:40:18.713363  271969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:40:18.734014  271969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:40:18.756284  271969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:40:18.779696  271969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 14:40:18.805152  271969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/embed-certs-441390/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1121 14:40:18.834328  271969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/embed-certs-441390/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:40:18.860634  271969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/embed-certs-441390/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:40:18.888974  271969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/embed-certs-441390/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 14:40:18.911907  271969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem --> /usr/share/ca-certificates/14542.pem (1338 bytes)
	I1121 14:40:18.931075  271969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /usr/share/ca-certificates/145422.pem (1708 bytes)
	I1121 14:40:18.949821  271969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:40:18.968832  271969 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:40:18.981136  271969 ssh_runner.go:195] Run: openssl version
	I1121 14:40:18.987058  271969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145422.pem && ln -fs /usr/share/ca-certificates/145422.pem /etc/ssl/certs/145422.pem"
	I1121 14:40:18.996163  271969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145422.pem
	I1121 14:40:19.000081  271969 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145422.pem
	I1121 14:40:19.000135  271969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145422.pem
	I1121 14:40:19.039555  271969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145422.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:40:19.047016  271969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:40:19.054790  271969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:19.058309  271969 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:19.058355  271969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:19.097346  271969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:40:19.106249  271969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14542.pem && ln -fs /usr/share/ca-certificates/14542.pem /etc/ssl/certs/14542.pem"
	I1121 14:40:19.116363  271969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14542.pem
	I1121 14:40:19.121656  271969 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14542.pem
	I1121 14:40:19.121720  271969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	I1121 14:40:19.180058  271969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14542.pem /etc/ssl/certs/51391683.0"
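The three hash/symlink pairs above implement OpenSSL's hashed-directory convention: each CA copied under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (e.g. b5213941.0), which is how verification lookups locate it. A minimal Go sketch of the same step, assuming only that openssl is on PATH (the path below is copied from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the log's "openssl x509 -hash" + "ln -fs" pair:
// OpenSSL looks up CAs in /etc/ssl/certs by <subject-hash>.0 filenames.
func linkBySubjectHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // "ln -fs": replace any stale link first
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}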
	I1121 14:40:19.189553  271969 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:40:19.194179  271969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:40:19.252291  271969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:40:19.311293  271969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:40:19.390687  271969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:40:19.459679  271969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:40:19.536753  271969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
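The six "-checkend 86400" probes above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit flags imminent expiry. A sketch of the same check, using two of the logged paths as examples:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, crt := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		// Exit status 0 means the cert is still valid 86400s from now.
		err := exec.Command("openssl", "x509", "-noout", "-in", crt,
			"-checkend", "86400").Run()
		fmt.Printf("%s expiring within 24h: %v\n", crt, err != nil)
	}
}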
	I1121 14:40:19.603920  271969 kubeadm.go:401] StartCluster: {Name:embed-certs-441390 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-441390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:19.604065  271969 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:40:19.604128  271969 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:40:19.654446  271969 cri.go:89] found id: "d77dc478ac3af84688fd0b17964f2958adba526b379a9beee309ee2ce20ef8ab"
	I1121 14:40:19.654465  271969 cri.go:89] found id: "26a9f970518848d1b900fdd0d942efb823d83d328447dae842211d42697b5a1a"
	I1121 14:40:19.654471  271969 cri.go:89] found id: "89526413a6f7420bdb1189dd04428b799769f4b2d2b5cbf920e078cac420b1ac"
	I1121 14:40:19.654475  271969 cri.go:89] found id: "0760d88ca4d91b1009c8a72ad47ebcb1d7dd0be3b46f0aa30647c629d08bd762"
	I1121 14:40:19.654485  271969 cri.go:89] found id: ""
	I1121 14:40:19.654532  271969 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 14:40:19.676666  271969 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:40:19Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:40:19.676725  271969 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:40:19.691939  271969 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:40:19.691956  271969 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:40:19.691996  271969 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:40:19.720829  271969 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:40:19.721303  271969 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-441390" does not appear in /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:19.721443  271969 kubeconfig.go:62] /home/jenkins/minikube-integration/21847-11045/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-441390" cluster setting kubeconfig missing "embed-certs-441390" context setting]
	I1121 14:40:19.721802  271969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:19.723332  271969 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:40:19.737744  271969 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1121 14:40:19.737796  271969 kubeadm.go:602] duration metric: took 45.832981ms to restartPrimaryControlPlane
	I1121 14:40:19.737809  271969 kubeadm.go:403] duration metric: took 133.89781ms to StartCluster
	I1121 14:40:19.737827  271969 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:19.737963  271969 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:19.739134  271969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:19.739347  271969 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:40:19.740119  271969 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:40:19.740217  271969 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-441390"
	I1121 14:40:19.740235  271969 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-441390"
	W1121 14:40:19.740243  271969 addons.go:248] addon storage-provisioner should already be in state true
	I1121 14:40:19.740271  271969 host.go:66] Checking if "embed-certs-441390" exists ...
	I1121 14:40:19.740423  271969 config.go:182] Loaded profile config "embed-certs-441390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:19.740483  271969 addons.go:70] Setting dashboard=true in profile "embed-certs-441390"
	I1121 14:40:19.740496  271969 addons.go:239] Setting addon dashboard=true in "embed-certs-441390"
	W1121 14:40:19.740503  271969 addons.go:248] addon dashboard should already be in state true
	I1121 14:40:19.740538  271969 host.go:66] Checking if "embed-certs-441390" exists ...
	I1121 14:40:19.740797  271969 cli_runner.go:164] Run: docker container inspect embed-certs-441390 --format={{.State.Status}}
	I1121 14:40:19.740975  271969 addons.go:70] Setting default-storageclass=true in profile "embed-certs-441390"
	I1121 14:40:19.740995  271969 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-441390"
	I1121 14:40:19.740997  271969 out.go:179] * Verifying Kubernetes components...
	I1121 14:40:19.741586  271969 cli_runner.go:164] Run: docker container inspect embed-certs-441390 --format={{.State.Status}}
	I1121 14:40:19.741823  271969 cli_runner.go:164] Run: docker container inspect embed-certs-441390 --format={{.State.Status}}
	I1121 14:40:19.742650  271969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:19.779007  271969 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:40:19.780312  271969 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:19.780378  271969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:40:19.780461  271969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-441390
	I1121 14:40:19.782038  271969 addons.go:239] Setting addon default-storageclass=true in "embed-certs-441390"
	W1121 14:40:19.782063  271969 addons.go:248] addon default-storageclass should already be in state true
	I1121 14:40:19.782089  271969 host.go:66] Checking if "embed-certs-441390" exists ...
	I1121 14:40:19.782554  271969 cli_runner.go:164] Run: docker container inspect embed-certs-441390 --format={{.State.Status}}
	I1121 14:40:19.792239  271969 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1121 14:40:19.793290  271969 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1121 14:40:16.952693  268814 out.go:252]   - Generating certificates and keys ...
	I1121 14:40:16.952799  268814 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:40:16.952913  268814 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:40:17.073528  268814 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:40:17.458855  268814 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:40:17.792185  268814 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:40:17.953394  268814 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:40:18.036643  268814 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:40:18.036884  268814 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-696683] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:40:18.451872  268814 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:40:18.452095  268814 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-696683] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:40:18.629888  268814 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:40:19.069546  268814 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:40:19.189514  268814 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:40:19.190022  268814 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:40:19.407087  268814 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:40:19.695483  268814 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:40:20.176795  268814 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:40:20.325760  268814 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:40:17.479422  269911 cli_runner.go:164] Run: docker network inspect auto-989875 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:40:17.498111  269911 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1121 14:40:17.501935  269911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
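This one-liner is the report's recurring /etc/hosts idiom: filter out any existing line ending in "<TAB>host.minikube.internal", append the fresh mapping, and copy (not rename) the temp file back over /etc/hosts. A Go sketch of the same rewrite; the rationale for copying (it preserves the inode that Docker bind-mounts into the container) is an inference, not stated in the log:

package main

import (
	"os"
	"strings"
)

// pinHost drops any existing "<TAB>name" line and appends a fresh mapping,
// truncating /etc/hosts in place (same inode, like the log's "sudo cp").
func pinHost(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() { _ = pinHost("192.168.103.1", "host.minikube.internal") }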
	I1121 14:40:17.512283  269911 kubeadm.go:884] updating cluster {Name:auto-989875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-989875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:40:17.512414  269911 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:40:17.512466  269911 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:40:17.544692  269911 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:40:17.544713  269911 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:40:17.544762  269911 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:40:17.578755  269911 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:40:17.578782  269911 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:40:17.578796  269911 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1121 14:40:17.578901  269911 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-989875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-989875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:40:17.578986  269911 ssh_runner.go:195] Run: crio config
	I1121 14:40:17.630095  269911 cni.go:84] Creating CNI manager for ""
	I1121 14:40:17.630122  269911 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:40:17.630142  269911 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 14:40:17.630169  269911 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-989875 NodeName:auto-989875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:40:17.630326  269911 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-989875"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:40:17.630389  269911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:40:17.638733  269911 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:40:17.638794  269911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:40:17.653722  269911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1121 14:40:17.667766  269911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:40:17.685663  269911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1121 14:40:17.697657  269911 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:40:17.701674  269911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:40:17.711205  269911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:17.801987  269911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:17.825019  269911 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875 for IP: 192.168.103.2
	I1121 14:40:17.825041  269911 certs.go:195] generating shared ca certs ...
	I1121 14:40:17.825062  269911 certs.go:227] acquiring lock for ca certs: {Name:mkde3a7d6f17b238f06eab3a140993599f1b4367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:17.825252  269911 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key
	I1121 14:40:17.825316  269911 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key
	I1121 14:40:17.825327  269911 certs.go:257] generating profile certs ...
	I1121 14:40:17.825394  269911 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/client.key
	I1121 14:40:17.825413  269911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/client.crt with IP's: []
	I1121 14:40:18.762899  269911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/client.crt ...
	I1121 14:40:18.762966  269911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/client.crt: {Name:mk93407ac2c9be9a2269c2bfc2f4d54a6fac965b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:18.763147  269911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/client.key ...
	I1121 14:40:18.763161  269911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/client.key: {Name:mke831d2a735663f703fa3d592505e48ceb909d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:18.763306  269911 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/apiserver.key.4f4c826e
	I1121 14:40:18.763328  269911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/apiserver.crt.4f4c826e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1121 14:40:19.039942  269911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/apiserver.crt.4f4c826e ...
	I1121 14:40:19.039972  269911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/apiserver.crt.4f4c826e: {Name:mk5542cedc0485b0b0ea6814e73400052f8c457f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:19.040147  269911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/apiserver.key.4f4c826e ...
	I1121 14:40:19.040163  269911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/apiserver.key.4f4c826e: {Name:mk07aea87a5d80bc184ca5cd7b5931807067b484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:19.040255  269911 certs.go:382] copying /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/apiserver.crt.4f4c826e -> /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/apiserver.crt
	I1121 14:40:19.040363  269911 certs.go:386] copying /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/apiserver.key.4f4c826e -> /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/apiserver.key
	I1121 14:40:19.040464  269911 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/proxy-client.key
	I1121 14:40:19.040483  269911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/proxy-client.crt with IP's: []
	I1121 14:40:19.417892  269911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/proxy-client.crt ...
	I1121 14:40:19.417928  269911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/proxy-client.crt: {Name:mk4c7c247984b97ef9fe84a2a7ca012470e760ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:19.418104  269911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/proxy-client.key ...
	I1121 14:40:19.418122  269911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/proxy-client.key: {Name:mkf39cf617e322979d6dac274b096285f777a408 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
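Each "Writing cert/Writing key" pair above is preceded by a lock.go "WriteFile acquiring" line with a 500ms retry delay and 1m timeout, i.e. key material is written under an exclusive lock. The locking mechanism itself is not shown in the log; the sketch below illustrates the pattern with flock(2) as an assumed, Linux-only stand-in, and a hypothetical /tmp path:

package main

import (
	"os"
	"syscall"
)

// writeFileLocked takes an exclusive advisory lock on a sidecar .lock file,
// writes the payload, then releases the lock (Linux-only: flock(2)).
func writeFileLocked(path string, data []byte) error {
	lock, err := os.OpenFile(path+".lock", os.O_CREATE|os.O_RDWR, 0644)
	if err != nil {
		return err
	}
	defer lock.Close()
	if err := syscall.Flock(int(lock.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(lock.Fd()), syscall.LOCK_UN)
	return os.WriteFile(path, data, 0600)
}

func main() {
	_ = writeFileLocked("/tmp/demo.key", []byte("pem bytes here\n"))
}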
	I1121 14:40:19.418378  269911 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem (1338 bytes)
	W1121 14:40:19.418427  269911 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542_empty.pem, impossibly tiny 0 bytes
	I1121 14:40:19.418444  269911 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:40:19.418483  269911 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:40:19.418517  269911 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:40:19.418582  269911 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem (1679 bytes)
	I1121 14:40:19.418644  269911 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:40:19.419438  269911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:40:19.443178  269911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:40:19.487614  269911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:40:19.528980  269911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 14:40:19.553920  269911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1121 14:40:19.578541  269911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:40:19.601860  269911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:40:19.627888  269911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/auto-989875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:40:19.658965  269911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem --> /usr/share/ca-certificates/14542.pem (1338 bytes)
	I1121 14:40:19.688404  269911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /usr/share/ca-certificates/145422.pem (1708 bytes)
	I1121 14:40:19.723932  269911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:40:19.790788  269911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:40:19.831062  269911 ssh_runner.go:195] Run: openssl version
	I1121 14:40:19.854980  269911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:40:19.866304  269911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:19.872461  269911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:19.872508  269911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:19.932729  269911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:40:19.950042  269911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14542.pem && ln -fs /usr/share/ca-certificates/14542.pem /etc/ssl/certs/14542.pem"
	I1121 14:40:19.963574  269911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14542.pem
	I1121 14:40:19.968856  269911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14542.pem
	I1121 14:40:19.968982  269911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	I1121 14:40:20.029668  269911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14542.pem /etc/ssl/certs/51391683.0"
	I1121 14:40:20.040759  269911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145422.pem && ln -fs /usr/share/ca-certificates/145422.pem /etc/ssl/certs/145422.pem"
	I1121 14:40:20.052164  269911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145422.pem
	I1121 14:40:20.060472  269911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145422.pem
	I1121 14:40:20.060555  269911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145422.pem
	I1121 14:40:20.122344  269911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145422.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:40:20.139055  269911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:40:20.143515  269911 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 14:40:20.143682  269911 kubeadm.go:401] StartCluster: {Name:auto-989875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-989875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:20.143853  269911 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:40:20.143927  269911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:40:20.201070  269911 cri.go:89] found id: ""
	I1121 14:40:20.201142  269911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:40:20.215605  269911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 14:40:20.231446  269911 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 14:40:20.231497  269911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 14:40:20.247158  269911 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 14:40:20.247173  269911 kubeadm.go:158] found existing configuration files:
	
	I1121 14:40:20.247231  269911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 14:40:20.259478  269911 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 14:40:20.259539  269911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 14:40:20.271461  269911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 14:40:20.281945  269911 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 14:40:20.282002  269911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 14:40:20.292059  269911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 14:40:20.309867  269911 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 14:40:20.309926  269911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 14:40:20.320538  269911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 14:40:20.342876  269911 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 14:40:20.342935  269911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
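The four grep/rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint (or does not exist at all, as here) is deleted so that kubeadm init can regenerate it cleanly. The same logic condensed into one loop, as a sketch:

package main

import (
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(conf) // mirrors "sudo rm -f": ignore missing files
		}
	}
}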
	I1121 14:40:20.353913  269911 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 14:40:20.420183  269911 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:40:20.420261  269911 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:40:20.462938  269911 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:40:20.463095  269911 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:40:20.463187  269911 kubeadm.go:319] OS: Linux
	I1121 14:40:20.463276  269911 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:40:20.463396  269911 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:40:20.463491  269911 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:40:20.463618  269911 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:40:20.463740  269911 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:40:20.463811  269911 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:40:20.463888  269911 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:40:20.463954  269911 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:40:20.554965  269911 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:40:20.555207  269911 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:40:20.555354  269911 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:40:20.568717  269911 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:40:21.401011  268814 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:40:21.401731  268814 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:40:21.405363  268814 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:40:20.573647  269911 out.go:252]   - Generating certificates and keys ...
	I1121 14:40:20.573749  269911 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:40:20.573843  269911 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:40:20.884905  269911 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:40:21.249301  269911 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:40:21.575994  269911 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:40:19.796003  271969 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1121 14:40:19.796038  271969 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1121 14:40:19.796103  271969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-441390
	I1121 14:40:19.826699  271969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/embed-certs-441390/id_rsa Username:docker}
	I1121 14:40:19.829720  271969 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:19.829793  271969 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:40:19.829884  271969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-441390
	I1121 14:40:19.839414  271969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/embed-certs-441390/id_rsa Username:docker}
	I1121 14:40:19.861672  271969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/embed-certs-441390/id_rsa Username:docker}
	I1121 14:40:20.009272  271969 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:20.038407  271969 node_ready.go:35] waiting up to 6m0s for node "embed-certs-441390" to be "Ready" ...
	I1121 14:40:20.045029  271969 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1121 14:40:20.045091  271969 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1121 14:40:20.064102  271969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:20.085939  271969 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1121 14:40:20.085955  271969 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1121 14:40:20.091309  271969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:20.109317  271969 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1121 14:40:20.109348  271969 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1121 14:40:20.163126  271969 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1121 14:40:20.163146  271969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1121 14:40:20.250046  271969 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1121 14:40:20.250065  271969 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1121 14:40:20.275740  271969 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1121 14:40:20.275813  271969 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1121 14:40:20.299441  271969 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1121 14:40:20.299458  271969 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1121 14:40:20.321481  271969 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1121 14:40:20.321499  271969 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1121 14:40:20.351060  271969 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 14:40:20.351084  271969 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1121 14:40:20.384501  271969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 14:40:21.773550  271969 node_ready.go:49] node "embed-certs-441390" is "Ready"
	I1121 14:40:21.773714  271969 node_ready.go:38] duration metric: took 1.735266114s for node "embed-certs-441390" to be "Ready" ...
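The node_ready wait above polled the API until embed-certs-441390 reported the Ready condition, taking about 1.7s against the 6m budget. An equivalent check sketched from the command line with kubectl wait (an assumption for illustration — the harness polls the node object directly):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Blocks until the node has condition Ready, or the timeout expires.
	cmd := exec.Command("kubectl", "wait", "--for=condition=Ready",
		"node/embed-certs-441390", "--timeout=6m0s")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run()
}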
	I1121 14:40:21.773746  271969 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:40:21.773824  271969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:40:22.522284  271969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.458148284s)
	I1121 14:40:22.522351  271969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.431011935s)
	I1121 14:40:22.522498  271969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.137947919s)
	I1121 14:40:22.522523  271969 api_server.go:72] duration metric: took 2.7831313s to wait for apiserver process to appear ...
	I1121 14:40:22.522537  271969 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:40:22.522553  271969 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1121 14:40:22.524065  271969 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-441390 addons enable metrics-server
	
	I1121 14:40:22.529857  271969 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:40:22.529896  271969 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
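A 500 from /healthz during bootstrap is expected: the per-check body above shows exactly two post-start hooks still pending (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes), so the client keeps polling until the response turns 200. A sketch of such a poll against the logged endpoint; it skips TLS verification because the apiserver's serving cert is not in the local trust store, which is acceptable for a liveness probe but never for authenticated calls:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	for i := 0; i < 60; i++ {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body))
				return
			}
			// 500 with a [+]/[-] check list: hooks still starting, retry.
		}
		time.Sleep(time.Second)
	}
}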
	I1121 14:40:22.534628  271969 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1121 14:40:24.885247  266798 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:40:24.885325  266798 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:40:24.885437  266798 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:40:24.885530  266798 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:40:24.885616  266798 kubeadm.go:319] OS: Linux
	I1121 14:40:24.885696  266798 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:40:24.885773  266798 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:40:24.885847  266798 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:40:24.885901  266798 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:40:24.885959  266798 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:40:24.886030  266798 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:40:24.886106  266798 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:40:24.886167  266798 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:40:24.886267  266798 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:40:24.886425  266798 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:40:24.886567  266798 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:40:24.886671  266798 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:40:24.888184  266798 out.go:252]   - Generating certificates and keys ...
	I1121 14:40:24.888286  266798 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:40:24.888406  266798 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:40:24.888508  266798 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:40:24.888620  266798 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:40:24.888707  266798 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:40:24.888771  266798 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:40:24.888855  266798 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:40:24.889023  266798 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-859276 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:40:24.889067  266798 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:40:24.889271  266798 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-859276 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1121 14:40:24.889394  266798 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:40:24.889507  266798 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:40:24.889612  266798 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:40:24.889689  266798 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:40:24.889754  266798 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:40:24.889849  266798 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:40:24.889927  266798 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:40:24.890023  266798 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:40:24.890072  266798 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:40:24.890177  266798 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:40:24.890282  266798 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:40:24.891790  266798 out.go:252]   - Booting up control plane ...
	I1121 14:40:24.891911  266798 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:40:24.892019  266798 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:40:24.892124  266798 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:40:24.892273  266798 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:40:24.892390  266798 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:40:24.892538  266798 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:40:24.892644  266798 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:40:24.892702  266798 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:40:24.892835  266798 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:40:24.892916  266798 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:40:24.892982  266798 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.993627ms
	I1121 14:40:24.893106  266798 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:40:24.893195  266798 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1121 14:40:24.893308  266798 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:40:24.893428  266798 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:40:24.893581  266798 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.927212193s
	I1121 14:40:24.893673  266798 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.250097697s
	I1121 14:40:24.893738  266798 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001613588s
	I1121 14:40:24.893832  266798 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:40:24.893945  266798 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:40:24.893997  266798 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:40:24.894167  266798 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-859276 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:40:24.894217  266798 kubeadm.go:319] [bootstrap-token] Using token: 043ge9.q9ssu3o0tccyu5s9
	I1121 14:40:24.895474  266798 out.go:252]   - Configuring RBAC rules ...
	I1121 14:40:24.895592  266798 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:40:24.895677  266798 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:40:24.895897  266798 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:40:24.896078  266798 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:40:24.896194  266798 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:40:24.896323  266798 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:40:24.896501  266798 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:40:24.896578  266798 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:40:24.896639  266798 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:40:24.896646  266798 kubeadm.go:319] 
	I1121 14:40:24.896726  266798 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:40:24.896733  266798 kubeadm.go:319] 
	I1121 14:40:24.896827  266798 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:40:24.896837  266798 kubeadm.go:319] 
	I1121 14:40:24.896900  266798 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:40:24.896988  266798 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:40:24.897035  266798 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:40:24.897043  266798 kubeadm.go:319] 
	I1121 14:40:24.897119  266798 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:40:24.897128  266798 kubeadm.go:319] 
	I1121 14:40:24.897200  266798 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:40:24.897206  266798 kubeadm.go:319] 
	I1121 14:40:24.897273  266798 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:40:24.897386  266798 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:40:24.897483  266798 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:40:24.897494  266798 kubeadm.go:319] 
	I1121 14:40:24.897588  266798 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:40:24.897697  266798 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:40:24.897706  266798 kubeadm.go:319] 
	I1121 14:40:24.897817  266798 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 043ge9.q9ssu3o0tccyu5s9 \
	I1121 14:40:24.897929  266798 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 \
	I1121 14:40:24.897956  266798 kubeadm.go:319] 	--control-plane 
	I1121 14:40:24.897963  266798 kubeadm.go:319] 
	I1121 14:40:24.898068  266798 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:40:24.898084  266798 kubeadm.go:319] 
	I1121 14:40:24.898193  266798 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 043ge9.q9ssu3o0tccyu5s9 \
	I1121 14:40:24.898308  266798 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 
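	Both join commands pin the cluster CA through --discovery-token-ca-cert-hash. Should that hash ever need to be re-derived on the control plane, the standard openssl pipeline from the kubeadm docs applies; a sketch, with the cert path taken from the certificateDir logged above:

		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'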
	I1121 14:40:24.898326  266798 cni.go:84] Creating CNI manager for ""
	I1121 14:40:24.898337  266798 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:40:24.900208  266798 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:40:24.901293  266798 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:40:24.906111  266798 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:40:24.906129  266798 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:40:24.919941  266798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:40:25.215640  266798 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:40:25.215839  266798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:25.216236  266798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-859276 minikube.k8s.io/updated_at=2025_11_21T14_40_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=default-k8s-diff-port-859276 minikube.k8s.io/primary=true
	I1121 14:40:25.228406  266798 ops.go:34] apiserver oom_adj: -16
	I1121 14:40:25.298545  266798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
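	The clusterrolebinding created above (minikube-rbac) grants cluster-admin to the kube-system default service account, and the "kubectl get sa default" runs that recur later in this interleaved log are minikube polling until that account exists; the log later names this whole step elevateKubeSystemPrivileges. A quick manual check of the outcome, assuming admin credentials:

		kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:default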
	I1121 14:40:21.406613  268814 out.go:252]   - Booting up control plane ...
	I1121 14:40:21.406745  268814 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:40:21.406877  268814 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:40:21.408119  268814 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:40:21.426642  268814 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:40:21.426764  268814 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:40:21.442680  268814 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:40:21.442903  268814 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:40:21.443038  268814 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:40:21.639292  268814 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:40:21.639470  268814 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:40:22.639142  268814 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001769125s
	I1121 14:40:22.642535  268814 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:40:22.642707  268814 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1121 14:40:22.642863  268814 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:40:22.642994  268814 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:40:24.291743  268814 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.649038617s
	I1121 14:40:24.987291  268814 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.344693286s
	I1121 14:40:26.645258  268814 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001498481s
	I1121 14:40:26.659986  268814 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:40:26.672373  268814 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:40:26.682970  268814 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:40:26.683323  268814 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-696683 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:40:26.697111  268814 kubeadm.go:319] [bootstrap-token] Using token: rfxii8.1brv72wluqqatilb
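	Bootstrap tokens such as the one logged here are short-lived secrets in the kube-system namespace; if a logged token has expired by the time a node needs to join, the standard kubeadm commands on the control-plane node recover the situation:

		kubeadm token list
		# Mint a fresh token together with a ready-to-paste join command:
		kubeadm token create --print-join-command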
	I1121 14:40:22.189977  269911 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:40:22.400363  269911 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:40:22.400541  269911 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-989875 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1121 14:40:22.873147  269911 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:40:22.873326  269911 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-989875 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1121 14:40:23.011332  269911 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:40:23.550771  269911 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:40:24.062075  269911 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:40:24.062192  269911 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:40:24.573981  269911 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:40:24.870396  269911 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:40:26.082512  269911 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:40:26.654933  269911 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:40:26.694604  269911 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:40:26.695657  269911 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:40:26.700805  269911 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:40:26.703321  269911 out.go:252]   - Booting up control plane ...
	I1121 14:40:26.703546  269911 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:40:26.703681  269911 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:40:26.703781  269911 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:40:26.724318  269911 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:40:26.724608  269911 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:40:26.733096  269911 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:40:26.733708  269911 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:40:26.733850  269911 kubeadm.go:319] [kubelet-start] Starting the kubelet
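	The kubelet-check lines later in this run's output poll the kubelet's local healthz endpoint; the same probe works by hand on the node, using the URL kubeadm prints:

		curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy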
	I1121 14:40:22.535637  271969 addons.go:530] duration metric: took 2.795522782s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1121 14:40:23.022649  271969 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1121 14:40:23.035992  271969 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:40:23.036017  271969 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	healthz check failed
	I1121 14:40:23.523638  271969 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1121 14:40:23.529127  271969 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1121 14:40:23.530279  271969 api_server.go:141] control plane version: v1.34.1
	I1121 14:40:23.530304  271969 api_server.go:131] duration metric: took 1.007760897s to wait for apiserver health ...
	I1121 14:40:23.530314  271969 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:40:23.534303  271969 system_pods.go:59] 8 kube-system pods found
	I1121 14:40:23.534333  271969 system_pods.go:61] "coredns-66bc5c9577-sbjhs" [c780507f-61e0-418f-9033-a7e40d5df9ab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:23.534346  271969 system_pods.go:61] "etcd-embed-certs-441390" [f735d6ec-b023-4def-83f7-700f537ed8b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:40:23.534353  271969 system_pods.go:61] "kindnet-pg6qj" [200232c6-7d1f-4ad2-acdf-473aa5ca42aa] Running
	I1121 14:40:23.534365  271969 system_pods.go:61] "kube-apiserver-embed-certs-441390" [295f86da-cbfe-480a-b4d7-dfd48c384d70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:40:23.534373  271969 system_pods.go:61] "kube-controller-manager-embed-certs-441390" [bcfa7981-1309-400e-9d9f-2ed120b91df8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:40:23.534379  271969 system_pods.go:61] "kube-proxy-m2nzt" [50058869-6257-4b96-ab7b-53f1b6ebfa85] Running
	I1121 14:40:23.534386  271969 system_pods.go:61] "kube-scheduler-embed-certs-441390" [256074fe-71ec-4e72-979f-9cd68d7ad690] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:40:23.534391  271969 system_pods.go:61] "storage-provisioner" [2fa17547-fba1-43c4-bb71-c384dd1036aa] Running
	I1121 14:40:23.534398  271969 system_pods.go:74] duration metric: took 4.078886ms to wait for pod list to return data ...
	I1121 14:40:23.534414  271969 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:40:23.536741  271969 default_sa.go:45] found service account: "default"
	I1121 14:40:23.536761  271969 default_sa.go:55] duration metric: took 2.341144ms for default service account to be created ...
	I1121 14:40:23.536770  271969 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:40:23.539940  271969 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:23.539967  271969 system_pods.go:89] "coredns-66bc5c9577-sbjhs" [c780507f-61e0-418f-9033-a7e40d5df9ab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:23.539976  271969 system_pods.go:89] "etcd-embed-certs-441390" [f735d6ec-b023-4def-83f7-700f537ed8b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:40:23.539982  271969 system_pods.go:89] "kindnet-pg6qj" [200232c6-7d1f-4ad2-acdf-473aa5ca42aa] Running
	I1121 14:40:23.539991  271969 system_pods.go:89] "kube-apiserver-embed-certs-441390" [295f86da-cbfe-480a-b4d7-dfd48c384d70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:40:23.540002  271969 system_pods.go:89] "kube-controller-manager-embed-certs-441390" [bcfa7981-1309-400e-9d9f-2ed120b91df8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:40:23.540008  271969 system_pods.go:89] "kube-proxy-m2nzt" [50058869-6257-4b96-ab7b-53f1b6ebfa85] Running
	I1121 14:40:23.540018  271969 system_pods.go:89] "kube-scheduler-embed-certs-441390" [256074fe-71ec-4e72-979f-9cd68d7ad690] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:40:23.540027  271969 system_pods.go:89] "storage-provisioner" [2fa17547-fba1-43c4-bb71-c384dd1036aa] Running
	I1121 14:40:23.540036  271969 system_pods.go:126] duration metric: took 3.259843ms to wait for k8s-apps to be running ...
	I1121 14:40:23.540044  271969 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:40:23.540103  271969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:40:23.555663  271969 system_svc.go:56] duration metric: took 15.609713ms WaitForService to wait for kubelet
	I1121 14:40:23.555683  271969 kubeadm.go:587] duration metric: took 3.816292179s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:40:23.555701  271969 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:40:23.558166  271969 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:40:23.558191  271969 node_conditions.go:123] node cpu capacity is 8
	I1121 14:40:23.558205  271969 node_conditions.go:105] duration metric: took 2.498728ms to run NodePressure ...
	I1121 14:40:23.558219  271969 start.go:242] waiting for startup goroutines ...
	I1121 14:40:23.558229  271969 start.go:247] waiting for cluster config update ...
	I1121 14:40:23.558246  271969 start.go:256] writing updated cluster config ...
	I1121 14:40:23.558504  271969 ssh_runner.go:195] Run: rm -f paused
	I1121 14:40:23.562223  271969 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:40:23.564928  271969 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sbjhs" in "kube-system" namespace to be "Ready" or be gone ...
	W1121 14:40:25.569843  271969 pod_ready.go:104] pod "coredns-66bc5c9577-sbjhs" is not "Ready", error: <nil>
	I1121 14:40:26.699246  268814 out.go:252]   - Configuring RBAC rules ...
	I1121 14:40:26.699389  268814 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:40:26.704256  268814 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:40:26.711615  268814 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:40:26.713891  268814 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:40:26.716734  268814 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:40:26.720406  268814 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:40:27.050268  268814 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:40:27.465833  268814 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:40:28.078693  268814 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:40:28.080062  268814 kubeadm.go:319] 
	I1121 14:40:28.080182  268814 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:40:28.080203  268814 kubeadm.go:319] 
	I1121 14:40:28.080340  268814 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:40:28.080354  268814 kubeadm.go:319] 
	I1121 14:40:28.080407  268814 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:40:28.080512  268814 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:40:28.080597  268814 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:40:28.080613  268814 kubeadm.go:319] 
	I1121 14:40:28.080706  268814 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:40:28.080722  268814 kubeadm.go:319] 
	I1121 14:40:28.080788  268814 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:40:28.080802  268814 kubeadm.go:319] 
	I1121 14:40:28.080884  268814 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:40:28.081002  268814 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:40:28.081112  268814 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:40:28.081146  268814 kubeadm.go:319] 
	I1121 14:40:28.081273  268814 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:40:28.081481  268814 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:40:28.081510  268814 kubeadm.go:319] 
	I1121 14:40:28.081664  268814 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rfxii8.1brv72wluqqatilb \
	I1121 14:40:28.081814  268814 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 \
	I1121 14:40:28.081858  268814 kubeadm.go:319] 	--control-plane 
	I1121 14:40:28.081872  268814 kubeadm.go:319] 
	I1121 14:40:28.082002  268814 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:40:28.082012  268814 kubeadm.go:319] 
	I1121 14:40:28.082143  268814 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rfxii8.1brv72wluqqatilb \
	I1121 14:40:28.082296  268814 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 
	I1121 14:40:28.085587  268814 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:40:28.085764  268814 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
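	Of the two warnings above, the SystemVerification one is benign on this GCP image: the "configs" kernel module, which exposes /proc/config.gz for introspection, is not shipped for the 6.8.0-1044-gcp kernel, so kubeadm cannot parse the kernel config and falls back to its remaining checks. On a kernel that does ship the module, the probe kubeadm attempts looks like:

		modprobe configs && zcat /proc/config.gz | grep CONFIG_CGROUPS

	The Service-Kubelet warning is likewise expected here, since minikube starts the kubelet itself rather than relying on systemd enablement (the "sudo systemctl start kubelet" runs appear elsewhere in this log).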
	I1121 14:40:28.085780  268814 cni.go:84] Creating CNI manager for ""
	I1121 14:40:28.085787  268814 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:40:28.180488  268814 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:40:25.798677  266798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:26.298712  266798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:26.798878  266798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:27.299412  266798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:27.799252  266798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:28.299218  266798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:28.799283  266798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:29.299078  266798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:29.798861  266798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:29.914935  266798 kubeadm.go:1114] duration metric: took 4.699144391s to wait for elevateKubeSystemPrivileges
	I1121 14:40:29.914966  266798 kubeadm.go:403] duration metric: took 17.785748573s to StartCluster
	I1121 14:40:29.914988  266798 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:29.915068  266798 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:29.918493  266798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:29.918795  266798 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:40:29.918942  266798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:40:29.919214  266798 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:40:29.919311  266798 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-859276"
	I1121 14:40:29.919329  266798 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-859276"
	I1121 14:40:29.919375  266798 host.go:66] Checking if "default-k8s-diff-port-859276" exists ...
	I1121 14:40:29.919686  266798 config.go:182] Loaded profile config "default-k8s-diff-port-859276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:29.919736  266798 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-859276"
	I1121 14:40:29.919816  266798 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-859276"
	I1121 14:40:29.920119  266798 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-859276 --format={{.State.Status}}
	I1121 14:40:29.920665  266798 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-859276 --format={{.State.Status}}
	I1121 14:40:29.923732  266798 out.go:179] * Verifying Kubernetes components...
	I1121 14:40:29.925366  266798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:29.950859  266798 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:40:29.952182  266798 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:29.952675  266798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:40:29.952770  266798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-859276
	I1121 14:40:29.960512  266798 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-859276"
	I1121 14:40:29.960552  266798 host.go:66] Checking if "default-k8s-diff-port-859276" exists ...
	I1121 14:40:29.961027  266798 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-859276 --format={{.State.Status}}
	I1121 14:40:29.986990  266798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/default-k8s-diff-port-859276/id_rsa Username:docker}
	I1121 14:40:29.999751  266798 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:29.999775  266798 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:40:29.999836  266798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-859276
	I1121 14:40:30.035853  266798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/default-k8s-diff-port-859276/id_rsa Username:docker}
	I1121 14:40:30.087646  266798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
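	The escaped sed pipeline above patches the coredns ConfigMap in place: one expression injects a hosts block ahead of the "forward . /etc/resolv.conf" plugin, the other a "log" directive ahead of "errors". Reconstructed from the sed expressions (not captured from the cluster), the patched Corefile gains roughly:

		hosts {
		   192.168.76.1 host.minikube.internal
		   fallthrough
		}
		log

	which is how host.minikube.internal becomes resolvable from pods, as the later "host record injected" line confirms.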
	I1121 14:40:30.158822  266798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:30.172182  266798 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:30.212378  266798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:28.256754  268814 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 14:40:28.261599  268814 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 14:40:28.261619  268814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 14:40:28.277587  268814 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 14:40:28.639902  268814 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:40:28.640055  268814 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-696683 minikube.k8s.io/updated_at=2025_11_21T14_40_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=newest-cni-696683 minikube.k8s.io/primary=true
	I1121 14:40:28.640110  268814 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:28.800306  268814 ops.go:34] apiserver oom_adj: -16
	I1121 14:40:28.800380  268814 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:29.300954  268814 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:29.800485  268814 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:30.300458  268814 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:30.445980  266798 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1121 14:40:30.690692  266798 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-859276" to be "Ready" ...
	I1121 14:40:30.697446  266798 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:40:26.864954  269911 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:40:26.865112  269911 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:40:27.867205  269911 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002434348s
	I1121 14:40:27.872106  269911 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:40:27.872231  269911 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1121 14:40:27.872379  269911 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:40:27.872508  269911 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:40:30.330626  269911 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.458009238s
	I1121 14:40:31.103954  269911 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.231813227s
	W1121 14:40:27.571881  271969 pod_ready.go:104] pod "coredns-66bc5c9577-sbjhs" is not "Ready", error: <nil>
	W1121 14:40:29.575585  271969 pod_ready.go:104] pod "coredns-66bc5c9577-sbjhs" is not "Ready", error: <nil>
	I1121 14:40:30.074929  271969 pod_ready.go:94] pod "coredns-66bc5c9577-sbjhs" is "Ready"
	I1121 14:40:30.074953  271969 pod_ready.go:86] duration metric: took 6.510008518s for pod "coredns-66bc5c9577-sbjhs" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:30.077930  271969 pod_ready.go:83] waiting for pod "etcd-embed-certs-441390" in "kube-system" namespace to be "Ready" or be gone ...
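	The pod_ready polling seen in this run waits on each pod's Ready condition; an equivalent hand-rolled wait against the pod names in this log would be:

		kubectl -n kube-system wait --for=condition=Ready \
		  pod/etcd-embed-certs-441390 --timeout=4m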
	I1121 14:40:32.874086  269911 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001744671s
	I1121 14:40:32.885984  269911 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:40:32.894830  269911 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:40:32.902974  269911 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:40:32.903206  269911 kubeadm.go:319] [mark-control-plane] Marking the node auto-989875 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:40:32.910189  269911 kubeadm.go:319] [bootstrap-token] Using token: esqw23.cqkv1w2p4wo9xhul
	I1121 14:40:30.800445  268814 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:31.301408  268814 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:31.800698  268814 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:32.300775  268814 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:32.800838  268814 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:33.300631  268814 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:40:33.375657  268814 kubeadm.go:1114] duration metric: took 4.735738076s to wait for elevateKubeSystemPrivileges
	I1121 14:40:33.375695  268814 kubeadm.go:403] duration metric: took 16.712746307s to StartCluster
	I1121 14:40:33.375717  268814 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:33.375789  268814 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:33.377501  268814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:33.377789  268814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:40:33.377846  268814 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:40:33.377963  268814 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:40:33.378054  268814 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-696683"
	I1121 14:40:33.378076  268814 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-696683"
	I1121 14:40:33.378088  268814 config.go:182] Loaded profile config "newest-cni-696683": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:33.378105  268814 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:33.378137  268814 addons.go:70] Setting default-storageclass=true in profile "newest-cni-696683"
	I1121 14:40:33.378155  268814 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-696683"
	I1121 14:40:33.378498  268814 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:33.378714  268814 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:33.379499  268814 out.go:179] * Verifying Kubernetes components...
	I1121 14:40:33.380682  268814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:33.410833  268814 addons.go:239] Setting addon default-storageclass=true in "newest-cni-696683"
	I1121 14:40:33.410883  268814 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:33.411118  268814 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:40:33.411351  268814 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:33.412199  268814 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:33.412274  268814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:40:33.412337  268814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:33.444800  268814 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:33.444883  268814 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:40:33.444971  268814 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:33.446903  268814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:33.478299  268814 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:33.491209  268814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:40:33.579844  268814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:33.593540  268814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:33.610144  268814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:33.738249  268814 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1121 14:40:33.740854  268814 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:40:33.740912  268814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:40:33.918268  268814 api_server.go:72] duration metric: took 540.384608ms to wait for apiserver process to appear ...
	I1121 14:40:33.918296  268814 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:40:33.918317  268814 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:33.923531  268814 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 14:40:33.924361  268814 api_server.go:141] control plane version: v1.34.1
	I1121 14:40:33.924388  268814 api_server.go:131] duration metric: took 6.084652ms to wait for apiserver health ...
	I1121 14:40:33.924398  268814 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:40:33.926335  268814 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
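	With the addons reported enabled, their state can be confirmed from the host; a sketch using the profile name from this run:

		minikube -p newest-cni-696683 addons list
		kubectl -n kube-system get pod storage-provisioner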
	I1121 14:40:32.911467  269911 out.go:252]   - Configuring RBAC rules ...
	I1121 14:40:32.911639  269911 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:40:32.914393  269911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:40:32.919915  269911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:40:32.922237  269911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:40:32.924832  269911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:40:32.927228  269911 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:40:33.280374  269911 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:40:33.715243  269911 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:40:34.280591  269911 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:40:34.281470  269911 kubeadm.go:319] 
	I1121 14:40:34.281605  269911 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:40:34.281627  269911 kubeadm.go:319] 
	I1121 14:40:34.281747  269911 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:40:34.281756  269911 kubeadm.go:319] 
	I1121 14:40:34.281797  269911 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:40:34.281886  269911 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:40:34.281967  269911 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:40:34.281986  269911 kubeadm.go:319] 
	I1121 14:40:34.282072  269911 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:40:34.282083  269911 kubeadm.go:319] 
	I1121 14:40:34.282165  269911 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:40:34.282183  269911 kubeadm.go:319] 
	I1121 14:40:34.282268  269911 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:40:34.282377  269911 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:40:34.282478  269911 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:40:34.282488  269911 kubeadm.go:319] 
	I1121 14:40:34.282630  269911 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:40:34.282753  269911 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:40:34.282764  269911 kubeadm.go:319] 
	I1121 14:40:34.282874  269911 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token esqw23.cqkv1w2p4wo9xhul \
	I1121 14:40:34.283009  269911 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 \
	I1121 14:40:34.283053  269911 kubeadm.go:319] 	--control-plane 
	I1121 14:40:34.283068  269911 kubeadm.go:319] 
	I1121 14:40:34.283195  269911 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:40:34.283207  269911 kubeadm.go:319] 
	I1121 14:40:34.283283  269911 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token esqw23.cqkv1w2p4wo9xhul \
	I1121 14:40:34.283372  269911 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 
	I1121 14:40:34.285585  269911 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:40:34.285710  269911 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:40:34.285744  269911 cni.go:84] Creating CNI manager for ""
	I1121 14:40:34.285756  269911 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:40:34.288065  269911 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 14:40:33.928030  268814 system_pods.go:59] 8 kube-system pods found
	I1121 14:40:33.928106  268814 system_pods.go:61] "coredns-66bc5c9577-ncl4f" [93a097a2-31da-4456-8435-e1a976f3d7f9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 14:40:33.928129  268814 system_pods.go:61] "etcd-newest-cni-696683" [113e31f1-f22b-4ed8-adcb-8c12d55e1f4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:40:33.928138  268814 system_pods.go:61] "kindnet-m6v5n" [98b995f3-7968-4e19-abc1-10772001bd6c] Running
	I1121 14:40:33.928150  268814 system_pods.go:61] "kube-apiserver-newest-cni-696683" [a046bba0-991c-4291-b89a-a0e64e3686b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:40:33.928165  268814 system_pods.go:61] "kube-controller-manager-newest-cni-696683" [dd3689f1-9ccf-4bca-8147-1779d92c3598] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:40:33.928176  268814 system_pods.go:61] "kube-proxy-2dkdg" [13ba7b82-bf92-4b76-a812-685c12ecb21c] Running
	I1121 14:40:33.928186  268814 system_pods.go:61] "kube-scheduler-newest-cni-696683" [57fd312e-bc77-4ecb-9f3b-caa50247e033] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:40:33.928198  268814 system_pods.go:61] "storage-provisioner" [3cf44ed4-4cd8-4655-aef5-38415eb66de4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 14:40:33.928205  268814 system_pods.go:74] duration metric: took 3.800716ms to wait for pod list to return data ...
	I1121 14:40:33.928258  268814 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:40:33.928870  268814 addons.go:530] duration metric: took 550.902867ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:40:33.930596  268814 default_sa.go:45] found service account: "default"
	I1121 14:40:33.930616  268814 default_sa.go:55] duration metric: took 2.33066ms for default service account to be created ...
	I1121 14:40:33.930627  268814 kubeadm.go:587] duration metric: took 552.749905ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1121 14:40:33.930644  268814 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:40:33.932748  268814 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:40:33.932773  268814 node_conditions.go:123] node cpu capacity is 8
	I1121 14:40:33.932789  268814 node_conditions.go:105] duration metric: took 2.140536ms to run NodePressure ...
	I1121 14:40:33.932803  268814 start.go:242] waiting for startup goroutines ...
	I1121 14:40:34.247241  268814 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-696683" context rescaled to 1 replicas
	I1121 14:40:34.247274  268814 start.go:247] waiting for cluster config update ...
	I1121 14:40:34.247287  268814 start.go:256] writing updated cluster config ...
	I1121 14:40:34.247601  268814 ssh_runner.go:195] Run: rm -f paused
	I1121 14:40:34.295782  268814 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:40:34.297509  268814 out.go:179] * Done! kubectl is now configured to use "newest-cni-696683" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.093106345Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.096918343Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=46fff957-91a4-47cd-a4b3-6d0824f1c0a7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.097252333Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=391dd3c0-0260-462d-9dde-5da3c776be64 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.098262537Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.09887556Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.098982032Z" level=info msg="Ran pod sandbox ada466af3effa0363ead1edcc22f8d06e5ddc8c8d76206e8a02c8aee876f65c9 with infra container: kube-system/kube-proxy-2dkdg/POD" id=46fff957-91a4-47cd-a4b3-6d0824f1c0a7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.09974475Z" level=info msg="Ran pod sandbox 032a8c14dbcb7492b64030393a6468c5df2f5e43224552cd46bd889cc04945dc with infra container: kube-system/kindnet-m6v5n/POD" id=391dd3c0-0260-462d-9dde-5da3c776be64 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.100090627Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d7c5efaf-7dc8-47d7-a338-6c4717ab7b0f name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.100714419Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=032a0300-480d-48b1-89ba-2260a3d9f9f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.101137383Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=86602b88-4669-4760-ac7b-cd82fc5a2000 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.101507949Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6ba8055e-eebc-4515-a184-707251f3cc9e name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.104350427Z" level=info msg="Creating container: kube-system/kube-proxy-2dkdg/kube-proxy" id=b0b53b48-e374-4fa3-82be-998fd92cd482 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.104481679Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.105595005Z" level=info msg="Creating container: kube-system/kindnet-m6v5n/kindnet-cni" id=73d74e46-d651-4f59-9273-63f3edaaa2d4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.105684607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.110352664Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.110786236Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.111148199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.111673376Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.145372376Z" level=info msg="Created container 42ac2fa965e9235ef655f1b6398c887d5ead07a11db51414983f2490620b5fb6: kube-system/kindnet-m6v5n/kindnet-cni" id=73d74e46-d651-4f59-9273-63f3edaaa2d4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.145971527Z" level=info msg="Starting container: 42ac2fa965e9235ef655f1b6398c887d5ead07a11db51414983f2490620b5fb6" id=8fb699ba-9ac8-464b-b22d-d72e69e22d66 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.148065738Z" level=info msg="Started container" PID=1499 containerID=42ac2fa965e9235ef655f1b6398c887d5ead07a11db51414983f2490620b5fb6 description=kube-system/kindnet-m6v5n/kindnet-cni id=8fb699ba-9ac8-464b-b22d-d72e69e22d66 name=/runtime.v1.RuntimeService/StartContainer sandboxID=032a8c14dbcb7492b64030393a6468c5df2f5e43224552cd46bd889cc04945dc
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.149430236Z" level=info msg="Created container e8337eda6a0f36d94741325866231606d9d66bede2ddf3d1a4e8618d0c0afd97: kube-system/kube-proxy-2dkdg/kube-proxy" id=b0b53b48-e374-4fa3-82be-998fd92cd482 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.149971972Z" level=info msg="Starting container: e8337eda6a0f36d94741325866231606d9d66bede2ddf3d1a4e8618d0c0afd97" id=60dcfed8-fb64-431d-976a-a6ebda288775 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:40:33 newest-cni-696683 crio[776]: time="2025-11-21T14:40:33.153379066Z" level=info msg="Started container" PID=1500 containerID=e8337eda6a0f36d94741325866231606d9d66bede2ddf3d1a4e8618d0c0afd97 description=kube-system/kube-proxy-2dkdg/kube-proxy id=60dcfed8-fb64-431d-976a-a6ebda288775 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ada466af3effa0363ead1edcc22f8d06e5ddc8c8d76206e8a02c8aee876f65c9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	42ac2fa965e92       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   032a8c14dbcb7       kindnet-m6v5n                               kube-system
	e8337eda6a0f3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   2 seconds ago       Running             kube-proxy                0                   ada466af3effa       kube-proxy-2dkdg                            kube-system
	86865ccd46c03       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   a68986c065cbd       kube-apiserver-newest-cni-696683            kube-system
	7cb0d95695dc6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   722739c2b8e77       kube-controller-manager-newest-cni-696683   kube-system
	9c21db0194d31       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   ffd45f615a668       kube-scheduler-newest-cni-696683            kube-system
	51b01dfadcba6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   9fef35b3f1260       etcd-newest-cni-696683                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-696683
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-696683
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=newest-cni-696683
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_40_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:40:24 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-696683
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:40:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:40:27 +0000   Fri, 21 Nov 2025 14:40:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:40:27 +0000   Fri, 21 Nov 2025 14:40:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:40:27 +0000   Fri, 21 Nov 2025 14:40:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 21 Nov 2025 14:40:27 +0000   Fri, 21 Nov 2025 14:40:23 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-696683
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                eb56864d-718a-4ff0-98f9-3e18a790b305
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-696683                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-m6v5n                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-696683             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-696683    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-2dkdg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-696683             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-696683 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-696683 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-696683 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-696683 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-696683 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s                 kubelet          Node newest-cni-696683 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-696683 event: Registered Node newest-cni-696683 in Controller
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [51b01dfadcba642c06d46cc68ba918b46bb80bb3aa66c9a14e4a849c3e957895] <==
	{"level":"warn","ts":"2025-11-21T14:40:24.273727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.283416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.290828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.305200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.309516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.317334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.326414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.334075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.341999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.358878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.368068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.375269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.382346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.389287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.395196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.401855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.408506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.415464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.423240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.429395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.436109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.443837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.458816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.466284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:24.473473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51814","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:40:35 up  1:23,  0 user,  load average: 4.58, 2.98, 1.90
	Linux newest-cni-696683 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [42ac2fa965e9235ef655f1b6398c887d5ead07a11db51414983f2490620b5fb6] <==
	I1121 14:40:33.382850       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:40:33.383263       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:40:33.383412       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:40:33.383436       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:40:33.383469       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:40:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:40:33.682648       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:40:33.682691       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:40:33.682715       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:40:33.683192       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:40:34.082506       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:40:34.082540       1 metrics.go:72] Registering metrics
	I1121 14:40:34.082623       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [86865ccd46c03f39562c2107ce5469a8d5c6ee989c72244770eb1bcf598f5092] <==
	I1121 14:40:25.031162       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1121 14:40:25.031213       1 aggregator.go:171] initial CRD sync complete...
	I1121 14:40:25.031226       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 14:40:25.031233       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:40:25.031238       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:40:25.034594       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:40:25.034686       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:40:25.224992       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:40:25.928671       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:40:25.932100       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:40:25.932114       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:40:26.410655       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:40:26.472067       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:40:26.532836       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:40:26.539325       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1121 14:40:26.541001       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:40:26.546553       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:40:26.972933       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:40:27.456350       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:40:27.464961       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:40:27.474555       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:40:32.772145       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:40:32.873388       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:40:32.877734       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:40:33.021600       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [7cb0d95695dc63dc3724169b29b82de0af58773633d564793b94a60a3236ece7] <==
	I1121 14:40:31.969595       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:40:31.969778       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:40:31.970431       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 14:40:31.970470       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:40:31.971610       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:40:31.971664       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 14:40:31.971939       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:40:31.971967       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:40:31.972847       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 14:40:31.972933       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 14:40:31.976160       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:40:31.977962       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:40:31.979034       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 14:40:31.979118       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 14:40:31.979172       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 14:40:31.979187       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 14:40:31.979195       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 14:40:31.982356       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:40:31.982385       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:40:31.983640       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:40:31.986680       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-696683" podCIDRs=["10.42.0.0/24"]
	I1121 14:40:31.989676       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 14:40:31.994106       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:40:31.998511       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:40:32.004323       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e8337eda6a0f36d94741325866231606d9d66bede2ddf3d1a4e8618d0c0afd97] <==
	I1121 14:40:33.193614       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:40:33.267389       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:40:33.367621       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:40:33.367660       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:40:33.367789       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:40:33.398256       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:40:33.398330       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:40:33.414911       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:40:33.415387       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:40:33.415847       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:40:33.417509       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:40:33.417538       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:40:33.417592       1 config.go:200] "Starting service config controller"
	I1121 14:40:33.417606       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:40:33.417645       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:40:33.417658       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:40:33.417759       1 config.go:309] "Starting node config controller"
	I1121 14:40:33.417797       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:40:33.417823       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:40:33.517655       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:40:33.517666       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:40:33.517747       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9c21db0194d317481149e7f357e45f49cc5d16f7daab3c16c3a79158fce429df] <==
	E1121 14:40:24.984111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:40:24.984373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:40:24.984504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:40:24.984525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:40:24.984524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:40:24.984619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:40:24.984717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:40:24.984726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:40:24.984782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:40:24.984788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:40:24.984794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:40:24.984951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:40:24.984951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:40:25.816845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:40:25.844988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:40:25.870385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:40:25.887816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:40:25.949921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:40:26.058883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:40:26.103659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:40:26.122428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:40:26.130745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:40:26.133786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:40:26.173047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1121 14:40:26.581160       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:40:27 newest-cni-696683 kubelet[1302]: I1121 14:40:27.527407    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64babaa1f2c755d7f5872459a4ebe884-etc-ca-certificates\") pod \"kube-controller-manager-newest-cni-696683\" (UID: \"64babaa1f2c755d7f5872459a4ebe884\") " pod="kube-system/kube-controller-manager-newest-cni-696683"
	Nov 21 14:40:27 newest-cni-696683 kubelet[1302]: I1121 14:40:27.527429    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/64babaa1f2c755d7f5872459a4ebe884-flexvolume-dir\") pod \"kube-controller-manager-newest-cni-696683\" (UID: \"64babaa1f2c755d7f5872459a4ebe884\") " pod="kube-system/kube-controller-manager-newest-cni-696683"
	Nov 21 14:40:28 newest-cni-696683 kubelet[1302]: I1121 14:40:28.314312    1302 apiserver.go:52] "Watching apiserver"
	Nov 21 14:40:28 newest-cni-696683 kubelet[1302]: I1121 14:40:28.324223    1302 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 21 14:40:28 newest-cni-696683 kubelet[1302]: I1121 14:40:28.387555    1302 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-696683"
	Nov 21 14:40:28 newest-cni-696683 kubelet[1302]: I1121 14:40:28.387916    1302 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-696683"
	Nov 21 14:40:28 newest-cni-696683 kubelet[1302]: I1121 14:40:28.388237    1302 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-696683"
	Nov 21 14:40:28 newest-cni-696683 kubelet[1302]: E1121 14:40:28.410098    1302 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-696683\" already exists" pod="kube-system/kube-apiserver-newest-cni-696683"
	Nov 21 14:40:28 newest-cni-696683 kubelet[1302]: E1121 14:40:28.414247    1302 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-696683\" already exists" pod="kube-system/kube-scheduler-newest-cni-696683"
	Nov 21 14:40:28 newest-cni-696683 kubelet[1302]: E1121 14:40:28.416984    1302 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-696683\" already exists" pod="kube-system/etcd-newest-cni-696683"
	Nov 21 14:40:28 newest-cni-696683 kubelet[1302]: I1121 14:40:28.493235    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-696683" podStartSLOduration=1.49320994 podStartE2EDuration="1.49320994s" podCreationTimestamp="2025-11-21 14:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:40:28.477524878 +0000 UTC m=+1.241158439" watchObservedRunningTime="2025-11-21 14:40:28.49320994 +0000 UTC m=+1.256843505"
	Nov 21 14:40:28 newest-cni-696683 kubelet[1302]: I1121 14:40:28.503783    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-696683" podStartSLOduration=1.503762327 podStartE2EDuration="1.503762327s" podCreationTimestamp="2025-11-21 14:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:40:28.494658623 +0000 UTC m=+1.258292207" watchObservedRunningTime="2025-11-21 14:40:28.503762327 +0000 UTC m=+1.267395895"
	Nov 21 14:40:28 newest-cni-696683 kubelet[1302]: I1121 14:40:28.512820    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-696683" podStartSLOduration=1.512801433 podStartE2EDuration="1.512801433s" podCreationTimestamp="2025-11-21 14:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:40:28.512754683 +0000 UTC m=+1.276388253" watchObservedRunningTime="2025-11-21 14:40:28.512801433 +0000 UTC m=+1.276435001"
	Nov 21 14:40:28 newest-cni-696683 kubelet[1302]: I1121 14:40:28.512904    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-696683" podStartSLOduration=1.512896823 podStartE2EDuration="1.512896823s" podCreationTimestamp="2025-11-21 14:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:40:28.504319034 +0000 UTC m=+1.267952602" watchObservedRunningTime="2025-11-21 14:40:28.512896823 +0000 UTC m=+1.276530389"
	Nov 21 14:40:32 newest-cni-696683 kubelet[1302]: I1121 14:40:32.005519    1302 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 21 14:40:32 newest-cni-696683 kubelet[1302]: I1121 14:40:32.006627    1302 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 21 14:40:32 newest-cni-696683 kubelet[1302]: I1121 14:40:32.865375    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/13ba7b82-bf92-4b76-a812-685c12ecb21c-kube-proxy\") pod \"kube-proxy-2dkdg\" (UID: \"13ba7b82-bf92-4b76-a812-685c12ecb21c\") " pod="kube-system/kube-proxy-2dkdg"
	Nov 21 14:40:32 newest-cni-696683 kubelet[1302]: I1121 14:40:32.865429    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13ba7b82-bf92-4b76-a812-685c12ecb21c-xtables-lock\") pod \"kube-proxy-2dkdg\" (UID: \"13ba7b82-bf92-4b76-a812-685c12ecb21c\") " pod="kube-system/kube-proxy-2dkdg"
	Nov 21 14:40:32 newest-cni-696683 kubelet[1302]: I1121 14:40:32.865462    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13ba7b82-bf92-4b76-a812-685c12ecb21c-lib-modules\") pod \"kube-proxy-2dkdg\" (UID: \"13ba7b82-bf92-4b76-a812-685c12ecb21c\") " pod="kube-system/kube-proxy-2dkdg"
	Nov 21 14:40:32 newest-cni-696683 kubelet[1302]: I1121 14:40:32.865493    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98b995f3-7968-4e19-abc1-10772001bd6c-xtables-lock\") pod \"kindnet-m6v5n\" (UID: \"98b995f3-7968-4e19-abc1-10772001bd6c\") " pod="kube-system/kindnet-m6v5n"
	Nov 21 14:40:32 newest-cni-696683 kubelet[1302]: I1121 14:40:32.865516    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxwbs\" (UniqueName: \"kubernetes.io/projected/98b995f3-7968-4e19-abc1-10772001bd6c-kube-api-access-nxwbs\") pod \"kindnet-m6v5n\" (UID: \"98b995f3-7968-4e19-abc1-10772001bd6c\") " pod="kube-system/kindnet-m6v5n"
	Nov 21 14:40:32 newest-cni-696683 kubelet[1302]: I1121 14:40:32.865640    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnsdf\" (UniqueName: \"kubernetes.io/projected/13ba7b82-bf92-4b76-a812-685c12ecb21c-kube-api-access-gnsdf\") pod \"kube-proxy-2dkdg\" (UID: \"13ba7b82-bf92-4b76-a812-685c12ecb21c\") " pod="kube-system/kube-proxy-2dkdg"
	Nov 21 14:40:32 newest-cni-696683 kubelet[1302]: I1121 14:40:32.865678    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/98b995f3-7968-4e19-abc1-10772001bd6c-cni-cfg\") pod \"kindnet-m6v5n\" (UID: \"98b995f3-7968-4e19-abc1-10772001bd6c\") " pod="kube-system/kindnet-m6v5n"
	Nov 21 14:40:32 newest-cni-696683 kubelet[1302]: I1121 14:40:32.865706    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98b995f3-7968-4e19-abc1-10772001bd6c-lib-modules\") pod \"kindnet-m6v5n\" (UID: \"98b995f3-7968-4e19-abc1-10772001bd6c\") " pod="kube-system/kindnet-m6v5n"
	Nov 21 14:40:33 newest-cni-696683 kubelet[1302]: I1121 14:40:33.468388    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-m6v5n" podStartSLOduration=1.468366261 podStartE2EDuration="1.468366261s" podCreationTimestamp="2025-11-21 14:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:40:33.435629807 +0000 UTC m=+6.199263375" watchObservedRunningTime="2025-11-21 14:40:33.468366261 +0000 UTC m=+6.231999829"
	

-- /stdout --
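The "deploy a pod network" step that kubeadm prints in the log above is handled by minikube itself: with the "docker" driver and "crio" runtime it selects kindnet (cni.go:143), and the node reports NotReady ("no CNI configuration file in /etc/cni/net.d/") only until kindnet writes that config. A minimal manual check, assuming the profile from this run still exists; the profile and context names below are taken from the log, the rest is standard minikube/kubectl usage:

	out/minikube-linux-amd64 -p newest-cni-696683 ssh -- ls /etc/cni/net.d/
	kubectl --context newest-cni-696683 get nodes -o wide    # Ready flips once a CNI config is present
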
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-696683 -n newest-cni-696683
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-696683 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-ncl4f storage-provisioner
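These two pods match the Pending entries in the system_pods dump above; both were reported Unschedulable because the node still carried the node.kubernetes.io/not-ready:NoSchedule taint shown under "describe nodes". A quick way to confirm the taint (context and node name taken from this run):

	kubectl --context newest-cni-696683 get node newest-cni-696683 -o jsonpath='{.spec.taints[*].key}{"\n"}'
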
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-696683 describe pod coredns-66bc5c9577-ncl4f storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-696683 describe pod coredns-66bc5c9577-ncl4f storage-provisioner: exit status 1 (57.090826ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-ncl4f" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-696683 describe pod coredns-66bc5c9577-ncl4f storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.09s)
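
The NotFound errors above are most plausibly a race rather than a helper bug: by the time describe ran, the pods listed a moment earlier had been deleted and replaced (the coredns deployment was rescaled to 1 replica shortly before, per the kapi.go line in the log). A more race-tolerant variant would re-check each pod before describing it; a sketch under that assumption, reusing the context and pod names from this run:

	for p in coredns-66bc5c9577-ncl4f storage-provisioner; do
	  kubectl --context newest-cni-696683 -n kube-system get pod "$p" >/dev/null 2>&1 \
	    && kubectl --context newest-cni-696683 -n kube-system describe pod "$p"
	done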

TestStartStop/group/newest-cni/serial/Pause (5.53s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-696683 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-696683 --alsologtostderr -v=1: exit status 80 (1.644029338s)

-- stdout --
	* Pausing node newest-cni-696683 ... 
	
	

-- /stdout --
** stderr ** 
	I1121 14:40:50.505951  282392 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:40:50.506232  282392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:50.506247  282392 out.go:374] Setting ErrFile to fd 2...
	I1121 14:40:50.506253  282392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:50.506545  282392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:40:50.506896  282392 out.go:368] Setting JSON to false
	I1121 14:40:50.506940  282392 mustload.go:66] Loading cluster: newest-cni-696683
	I1121 14:40:50.507389  282392 config.go:182] Loaded profile config "newest-cni-696683": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:50.507980  282392 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:50.532095  282392 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:50.532326  282392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:40:50.598649  282392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-21 14:40:50.588122154 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:40:50.599200  282392 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-696683 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1121 14:40:50.600765  282392 out.go:179] * Pausing node newest-cni-696683 ... 
	I1121 14:40:50.601677  282392 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:50.602003  282392 ssh_runner.go:195] Run: systemctl --version
	I1121 14:40:50.602045  282392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:50.619998  282392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:50.713655  282392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:40:50.726068  282392 pause.go:52] kubelet running: true
	I1121 14:40:50.726134  282392 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:40:50.874681  282392 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:40:50.874773  282392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:40:50.946770  282392 cri.go:89] found id: "525ca5ccf8737122b9a66a5b0feccd36cea2deec23560d29575393ada0330762"
	I1121 14:40:50.946829  282392 cri.go:89] found id: "e6409bf9f2c514d529c3416169d40e9f48edb2af837cd54c31176b7c2ee0a3d8"
	I1121 14:40:50.946836  282392 cri.go:89] found id: "15917aa53c8197587b8ebdb80d10a679b2a7abe6ff5a81b0d4f5a42900e02412"
	I1121 14:40:50.946841  282392 cri.go:89] found id: "76d7dc76ff36d3fb6387582649e9fc04ab3c1cbd059a19721997d005f5434abc"
	I1121 14:40:50.946845  282392 cri.go:89] found id: "ecf45bf1d37d86d0c9346ae4b4f597dfe7f80bbc47df49bb0994a548a8922b4b"
	I1121 14:40:50.946850  282392 cri.go:89] found id: "958b1593ef47f88f59d980553b03bdcf6b5f2c94efadd777421a6a497aa6ba37"
	I1121 14:40:50.946854  282392 cri.go:89] found id: ""
	I1121 14:40:50.946891  282392 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:40:50.959094  282392 retry.go:31] will retry after 315.48008ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:40:50Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:40:51.275621  282392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:40:51.288790  282392 pause.go:52] kubelet running: false
	I1121 14:40:51.288857  282392 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:40:51.427258  282392 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:40:51.427341  282392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:40:51.493451  282392 cri.go:89] found id: "525ca5ccf8737122b9a66a5b0feccd36cea2deec23560d29575393ada0330762"
	I1121 14:40:51.493469  282392 cri.go:89] found id: "e6409bf9f2c514d529c3416169d40e9f48edb2af837cd54c31176b7c2ee0a3d8"
	I1121 14:40:51.493472  282392 cri.go:89] found id: "15917aa53c8197587b8ebdb80d10a679b2a7abe6ff5a81b0d4f5a42900e02412"
	I1121 14:40:51.493476  282392 cri.go:89] found id: "76d7dc76ff36d3fb6387582649e9fc04ab3c1cbd059a19721997d005f5434abc"
	I1121 14:40:51.493478  282392 cri.go:89] found id: "ecf45bf1d37d86d0c9346ae4b4f597dfe7f80bbc47df49bb0994a548a8922b4b"
	I1121 14:40:51.493482  282392 cri.go:89] found id: "958b1593ef47f88f59d980553b03bdcf6b5f2c94efadd777421a6a497aa6ba37"
	I1121 14:40:51.493491  282392 cri.go:89] found id: ""
	I1121 14:40:51.493523  282392 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:40:51.505506  282392 retry.go:31] will retry after 335.545378ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:40:51Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:40:51.842170  282392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:40:51.856359  282392 pause.go:52] kubelet running: false
	I1121 14:40:51.856407  282392 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:40:51.979048  282392 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:40:51.979107  282392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:40:52.046619  282392 cri.go:89] found id: "525ca5ccf8737122b9a66a5b0feccd36cea2deec23560d29575393ada0330762"
	I1121 14:40:52.046646  282392 cri.go:89] found id: "e6409bf9f2c514d529c3416169d40e9f48edb2af837cd54c31176b7c2ee0a3d8"
	I1121 14:40:52.046652  282392 cri.go:89] found id: "15917aa53c8197587b8ebdb80d10a679b2a7abe6ff5a81b0d4f5a42900e02412"
	I1121 14:40:52.046656  282392 cri.go:89] found id: "76d7dc76ff36d3fb6387582649e9fc04ab3c1cbd059a19721997d005f5434abc"
	I1121 14:40:52.046662  282392 cri.go:89] found id: "ecf45bf1d37d86d0c9346ae4b4f597dfe7f80bbc47df49bb0994a548a8922b4b"
	I1121 14:40:52.046668  282392 cri.go:89] found id: "958b1593ef47f88f59d980553b03bdcf6b5f2c94efadd777421a6a497aa6ba37"
	I1121 14:40:52.046673  282392 cri.go:89] found id: ""
	I1121 14:40:52.046734  282392 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:40:52.060853  282392 out.go:203] 
	W1121 14:40:52.061901  282392 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:40:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:40:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:40:52.061922  282392 out.go:285] * 
	* 
	W1121 14:40:52.066314  282392 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:40:52.067381  282392 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-696683 --alsologtostderr -v=1 failed: exit status 80
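Note: the pause aborts in the container-listing step. crictl still reports six running kube-system containers, but the follow-up `sudo runc list -f json` fails with `open /run/runc: no such file or directory`, and after three retries minikube exits with GUEST_PAUSE. To replay the same probes by hand against the node, something like the following works (a sketch that reuses the exact commands from the stderr log above; the profile name and binary path are from this run):

	# list kube-system containers the same way pause does
	out/minikube-linux-amd64 ssh -p newest-cni-696683 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the step that fails in the log: runc cannot open its default root
	out/minikube-linux-amd64 ssh -p newest-cni-696683 -- sudo runc list -f json
	# check whether the runc state directory exists at all
	out/minikube-linux-amd64 ssh -p newest-cni-696683 -- ls -ld /run/runc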
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-696683
helpers_test.go:243: (dbg) docker inspect newest-cni-696683:

-- stdout --
	[
	    {
	        "Id": "5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d",
	        "Created": "2025-11-21T14:40:09.858539205Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280415,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:40:39.963886418Z",
	            "FinishedAt": "2025-11-21T14:40:38.952434797Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d/hostname",
	        "HostsPath": "/var/lib/docker/containers/5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d/hosts",
	        "LogPath": "/var/lib/docker/containers/5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d/5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d-json.log",
	        "Name": "/newest-cni-696683",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-696683:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-696683",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d",
	                "LowerDir": "/var/lib/docker/overlay2/655e2907b15a841ba8d7c09b0eecf0c4c7a490c173b62e8a174062781efe4d9f-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/655e2907b15a841ba8d7c09b0eecf0c4c7a490c173b62e8a174062781efe4d9f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/655e2907b15a841ba8d7c09b0eecf0c4c7a490c173b62e8a174062781efe4d9f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/655e2907b15a841ba8d7c09b0eecf0c4c7a490c173b62e8a174062781efe4d9f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-696683",
	                "Source": "/var/lib/docker/volumes/newest-cni-696683/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-696683",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-696683",
	                "name.minikube.sigs.k8s.io": "newest-cni-696683",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "07af311bb555603fa47ba27896c6060e64b510d8a78421146c568859c13cf876",
	            "SandboxKey": "/var/run/docker/netns/07af311bb555",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-696683": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3b7fce235b16a39fb4cd51190508048f90b9443938b78208046c510cbfbee936",
	                    "EndpointID": "0680bdf179228b30d99a20f9ad54df55fc62dd2f35a770ce8a30c584b0b497ca",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "76:2b:1e:e5:a7:9d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-696683",
	                        "5aacf10261f2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
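Note: the `127.0.0.1:33099` SSH endpoint used throughout the pause log is read directly from this inspect output; the same Go template that appears in the log (`cli_runner.go:164`) can be run by hand to recover it (a sketch, assuming the container from this run is still up):

	# prints 33099, the HostPort bound to the container's 22/tcp (see NetworkSettings.Ports above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-696683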
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-696683 -n newest-cni-696683
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-696683 -n newest-cni-696683: exit status 2 (320.814055ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
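Note: the helper tolerates exit status 2 here because the host container reports `Running` even though the cluster components were stopped by the failed pause. To see the formatted field and the exit code side by side, one can rerun the same command (same binary and profile as above):

	out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-696683 -n newest-cni-696683
	# printed "Running" with exit status 2 in this run, which the helper treats as possibly ok
	echo "status exit code: $?"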
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-696683 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-214044    │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ start   │ -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-214044    │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ addons  │ enable metrics-server -p embed-certs-441390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ image   │ no-preload-589411 image list --format=json                                                                                                                                                                                                    │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ pause   │ -p no-preload-589411 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ stop    │ -p embed-certs-441390 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p cert-expiration-046125 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-046125       │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p kubernetes-upgrade-214044                                                                                                                                                                                                                  │ kubernetes-upgrade-214044    │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p disable-driver-mounts-708207                                                                                                                                                                                                               │ disable-driver-mounts-708207 │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p default-k8s-diff-port-859276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-859276 │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ delete  │ -p no-preload-589411                                                                                                                                                                                                                          │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p cert-expiration-046125                                                                                                                                                                                                                     │ cert-expiration-046125       │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p no-preload-589411                                                                                                                                                                                                                          │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p newest-cni-696683 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p auto-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-441390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p embed-certs-441390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ addons  │ enable metrics-server -p newest-cni-696683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ stop    │ -p newest-cni-696683 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ addons  │ enable dashboard -p newest-cni-696683 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p newest-cni-696683 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ image   │ newest-cni-696683 image list --format=json                                                                                                                                                                                                    │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ pause   │ -p newest-cni-696683 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ image   │ embed-certs-441390 image list --format=json                                                                                                                                                                                                   │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ pause   │ -p embed-certs-441390 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:40:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:40:39.698658  280056 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:40:39.699061  280056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:39.699072  280056 out.go:374] Setting ErrFile to fd 2...
	I1121 14:40:39.699078  280056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:39.699382  280056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:40:39.700016  280056 out.go:368] Setting JSON to false
	I1121 14:40:39.701601  280056 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4989,"bootTime":1763731051,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:40:39.701719  280056 start.go:143] virtualization: kvm guest
	I1121 14:40:39.703395  280056 out.go:179] * [newest-cni-696683] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:40:39.705010  280056 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:40:39.705085  280056 notify.go:221] Checking for updates...
	I1121 14:40:39.709543  280056 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:40:39.710889  280056 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:39.711654  280056 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:40:39.712608  280056 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:40:39.202164  269911 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:39.202179  269911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:40:39.202225  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:39.203262  269911 addons.go:239] Setting addon default-storageclass=true in "auto-989875"
	I1121 14:40:39.203302  269911 host.go:66] Checking if "auto-989875" exists ...
	I1121 14:40:39.203757  269911 cli_runner.go:164] Run: docker container inspect auto-989875 --format={{.State.Status}}
	I1121 14:40:39.231112  269911 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:39.231135  269911 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:40:39.231188  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:39.231351  269911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/auto-989875/id_rsa Username:docker}
	I1121 14:40:39.253202  269911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/auto-989875/id_rsa Username:docker}
	I1121 14:40:39.264883  269911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:40:39.321852  269911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:39.368717  269911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:39.374920  269911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:39.469946  269911 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1121 14:40:39.473299  269911 node_ready.go:35] waiting up to 15m0s for node "auto-989875" to be "Ready" ...
	I1121 14:40:39.713942  269911 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:40:39.714030  280056 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:40:39.715503  280056 config.go:182] Loaded profile config "newest-cni-696683": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:39.715983  280056 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:40:39.743756  280056 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:40:39.743915  280056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:40:39.815430  280056 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:40:39.803326466 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:40:39.815546  280056 docker.go:319] overlay module found
	I1121 14:40:39.816656  280056 out.go:179] * Using the docker driver based on existing profile
	I1121 14:40:39.817754  280056 start.go:309] selected driver: docker
	I1121 14:40:39.817774  280056 start.go:930] validating driver "docker" against &{Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:39.817892  280056 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:40:39.818542  280056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:40:39.891888  280056 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:40:39.880844572 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:40:39.892243  280056 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1121 14:40:39.892278  280056 cni.go:84] Creating CNI manager for ""
	I1121 14:40:39.892328  280056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:40:39.892373  280056 start.go:353] cluster config:
	{Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:39.894009  280056 out.go:179] * Starting "newest-cni-696683" primary control-plane node in "newest-cni-696683" cluster
	I1121 14:40:39.894992  280056 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:40:39.895975  280056 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:40:39.896900  280056 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:40:39.896944  280056 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 14:40:39.896959  280056 cache.go:65] Caching tarball of preloaded images
	I1121 14:40:39.896995  280056 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:40:39.897060  280056 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 14:40:39.897075  280056 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:40:39.897184  280056 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/config.json ...
	I1121 14:40:39.918549  280056 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:40:39.918582  280056 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:40:39.918603  280056 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:40:39.918629  280056 start.go:360] acquireMachinesLock for newest-cni-696683: {Name:mk685873e16cf8d4315d67b3bf50f89f3c32618f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:40:39.918691  280056 start.go:364] duration metric: took 39.301µs to acquireMachinesLock for "newest-cni-696683"
	I1121 14:40:39.918713  280056 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:40:39.918723  280056 fix.go:54] fixHost starting: 
	I1121 14:40:39.918941  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:39.939232  280056 fix.go:112] recreateIfNeeded on newest-cni-696683: state=Stopped err=<nil>
	W1121 14:40:39.939257  280056 fix.go:138] unexpected machine state, will restart: <nil>
	W1121 14:40:37.195055  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:39.196240  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	I1121 14:40:39.715037  269911 addons.go:530] duration metric: took 541.90535ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:40:39.974357  269911 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-989875" context rescaled to 1 replicas
	W1121 14:40:41.476592  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	I1121 14:40:39.940709  280056 out.go:252] * Restarting existing docker container for "newest-cni-696683" ...
	I1121 14:40:39.940774  280056 cli_runner.go:164] Run: docker start newest-cni-696683
	I1121 14:40:40.204292  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:40.225047  280056 kic.go:430] container "newest-cni-696683" state is running.
	I1121 14:40:40.225352  280056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-696683
	I1121 14:40:40.245950  280056 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/config.json ...
	I1121 14:40:40.246193  280056 machine.go:94] provisionDockerMachine start ...
	I1121 14:40:40.246264  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:40.266155  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:40.266469  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:40.266487  280056 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:40:40.267187  280056 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58902->127.0.0.1:33099: read: connection reset by peer
	I1121 14:40:43.397206  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-696683
	
	I1121 14:40:43.397237  280056 ubuntu.go:182] provisioning hostname "newest-cni-696683"
	I1121 14:40:43.397300  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:43.416243  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:43.416538  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:43.416568  280056 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-696683 && echo "newest-cni-696683" | sudo tee /etc/hostname
	I1121 14:40:43.552946  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-696683
	
	I1121 14:40:43.553020  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:43.570469  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:43.570726  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:43.570747  280056 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-696683' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-696683/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-696683' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:40:43.699459  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:40:43.699487  280056 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11045/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11045/.minikube}
	I1121 14:40:43.699509  280056 ubuntu.go:190] setting up certificates
	I1121 14:40:43.699518  280056 provision.go:84] configureAuth start
	I1121 14:40:43.699572  280056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-696683
	I1121 14:40:43.716911  280056 provision.go:143] copyHostCerts
	I1121 14:40:43.716971  280056 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem, removing ...
	I1121 14:40:43.716988  280056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem
	I1121 14:40:43.717063  280056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem (1078 bytes)
	I1121 14:40:43.717170  280056 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem, removing ...
	I1121 14:40:43.717182  280056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem
	I1121 14:40:43.717225  280056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem (1123 bytes)
	I1121 14:40:43.717301  280056 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem, removing ...
	I1121 14:40:43.717311  280056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem
	I1121 14:40:43.717354  280056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem (1679 bytes)
	I1121 14:40:43.717424  280056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem org=jenkins.newest-cni-696683 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-696683]
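The line above signs a fresh server certificate against the shared minikube CA with the listed SANs. minikube does this in Go; a roughly equivalent stand-alone sketch with openssl, reusing the org and SAN values from the log (file names here are illustrative):

    # CSR for the machine, then sign it with the minikube CA, adding the SANs from the log
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -subj "/O=jenkins.newest-cni-696683" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:newest-cni-696683')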
	I1121 14:40:43.898083  280056 provision.go:177] copyRemoteCerts
	I1121 14:40:43.898146  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:40:43.898203  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:43.915505  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.009431  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:40:44.026983  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:40:44.043724  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:40:44.059839  280056 provision.go:87] duration metric: took 360.308976ms to configureAuth
	I1121 14:40:44.059858  280056 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:40:44.060029  280056 config.go:182] Loaded profile config "newest-cni-696683": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:44.060145  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.078061  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:44.078262  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:44.078281  280056 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:40:44.359271  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:40:44.359300  280056 machine.go:97] duration metric: took 4.113090842s to provisionDockerMachine
	I1121 14:40:44.359333  280056 start.go:293] postStartSetup for "newest-cni-696683" (driver="docker")
	I1121 14:40:44.359359  280056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:40:44.359441  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:40:44.359503  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.377727  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.471221  280056 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:40:44.474531  280056 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:40:44.474582  280056 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:40:44.474595  280056 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/addons for local assets ...
	I1121 14:40:44.474657  280056 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/files for local assets ...
	I1121 14:40:44.474769  280056 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem -> 145422.pem in /etc/ssl/certs
	I1121 14:40:44.474885  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:40:44.482193  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:40:44.498766  280056 start.go:296] duration metric: took 139.419384ms for postStartSetup
	I1121 14:40:44.498841  280056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:40:44.498885  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.516283  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.607254  280056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:40:44.611996  280056 fix.go:56] duration metric: took 4.693269423s for fixHost
	I1121 14:40:44.612015  280056 start.go:83] releasing machines lock for "newest-cni-696683", held for 4.693312828s
	I1121 14:40:44.612074  280056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-696683
	I1121 14:40:44.629484  280056 ssh_runner.go:195] Run: cat /version.json
	I1121 14:40:44.629530  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.629596  280056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:40:44.629660  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.646651  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.647257  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	W1121 14:40:41.693191  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:43.693977  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	I1121 14:40:44.789269  280056 ssh_runner.go:195] Run: systemctl --version
	I1121 14:40:44.795157  280056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:40:44.829469  280056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:40:44.833726  280056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:40:44.833770  280056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:40:44.841442  280056 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 14:40:44.841462  280056 start.go:496] detecting cgroup driver to use...
	I1121 14:40:44.841500  280056 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:40:44.841546  280056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:40:44.855704  280056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:40:44.867322  280056 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:40:44.867355  280056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:40:44.880286  280056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:40:44.891778  280056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:40:44.971173  280056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:40:45.053340  280056 docker.go:234] disabling docker service ...
	I1121 14:40:45.053430  280056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:40:45.066798  280056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:40:45.078751  280056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:40:45.158914  280056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:40:45.236074  280056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:40:45.247464  280056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:40:45.260830  280056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:40:45.260881  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.268922  280056 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 14:40:45.268972  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.276871  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.284760  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.292909  280056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:40:45.300239  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.308497  280056 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.316091  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.324294  280056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:40:45.330973  280056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:40:45.337651  280056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:45.412162  280056 ssh_runner.go:195] Run: sudo systemctl restart crio
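Taken together, the sed/grep edits above leave /etc/crio/crio.conf.d/02-crio.conf with lines equivalent to the following before crio is restarted (shown only to summarize the edits; exact placement depends on the file already present in the kicbase image):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]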
	I1121 14:40:45.548953  280056 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:40:45.549022  280056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:40:45.552808  280056 start.go:564] Will wait 60s for crictl version
	I1121 14:40:45.552866  280056 ssh_runner.go:195] Run: which crictl
	I1121 14:40:45.556653  280056 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:40:45.580611  280056 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:40:45.580686  280056 ssh_runner.go:195] Run: crio --version
	I1121 14:40:45.607820  280056 ssh_runner.go:195] Run: crio --version
	I1121 14:40:45.636081  280056 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:40:45.637049  280056 cli_runner.go:164] Run: docker network inspect newest-cni-696683 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:40:45.652698  280056 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:40:45.656512  280056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:40:45.667700  280056 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1121 14:40:45.668667  280056 kubeadm.go:884] updating cluster {Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:40:45.668785  280056 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:40:45.668828  280056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:40:45.700321  280056 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:40:45.700343  280056 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:40:45.700378  280056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:40:45.724113  280056 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:40:45.724131  280056 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:40:45.724139  280056 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1121 14:40:45.724223  280056 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-696683 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:40:45.724281  280056 ssh_runner.go:195] Run: crio config
	I1121 14:40:45.769317  280056 cni.go:84] Creating CNI manager for ""
	I1121 14:40:45.769335  280056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:40:45.769351  280056 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1121 14:40:45.769371  280056 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-696683 NodeName:newest-cni-696683 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:40:45.769497  280056 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-696683"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:40:45.769548  280056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:40:45.777468  280056 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:40:45.777525  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:40:45.785019  280056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1121 14:40:45.796834  280056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:40:45.808433  280056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
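The kubeadm config assembled above has just been written to /var/tmp/minikube/kubeadm.yaml.new on the node. Recent kubeadm releases can sanity-check such a file before it is used; for example, inside the node (the kubeadm path under the binaries directory is an assumption based on the kubectl invocations later in this log):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new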
	I1121 14:40:45.820149  280056 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:40:45.823519  280056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:40:45.832775  280056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:45.917710  280056 ssh_runner.go:195] Run: sudo systemctl start kubelet
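After the unit file and its 10-kubeadm.conf drop-in are copied and daemon-reload/start run, the effective kubelet unit (the base file plus the drop-in with the ExecStart= reset shown earlier) can be inspected with systemd itself:

    sudo systemctl cat kubelet    # shows /lib/systemd/system/kubelet.service plus the drop-in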
	I1121 14:40:45.942977  280056 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683 for IP: 192.168.85.2
	I1121 14:40:45.942996  280056 certs.go:195] generating shared ca certs ...
	I1121 14:40:45.943016  280056 certs.go:227] acquiring lock for ca certs: {Name:mkde3a7d6f17b238f06eab3a140993599f1b4367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:45.943143  280056 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key
	I1121 14:40:45.943197  280056 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key
	I1121 14:40:45.943209  280056 certs.go:257] generating profile certs ...
	I1121 14:40:45.943287  280056 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/client.key
	I1121 14:40:45.943338  280056 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.key.78303e51
	I1121 14:40:45.943372  280056 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.key
	I1121 14:40:45.943471  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem (1338 bytes)
	W1121 14:40:45.943505  280056 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542_empty.pem, impossibly tiny 0 bytes
	I1121 14:40:45.943516  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:40:45.943543  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:40:45.943582  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:40:45.943611  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem (1679 bytes)
	I1121 14:40:45.943651  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:40:45.944261  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:40:45.962656  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:40:45.981773  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:40:46.000183  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 14:40:46.026245  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:40:46.046663  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:40:46.062648  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:40:46.079837  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:40:46.096146  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:40:46.112465  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem --> /usr/share/ca-certificates/14542.pem (1338 bytes)
	I1121 14:40:46.128984  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /usr/share/ca-certificates/145422.pem (1708 bytes)
	I1121 14:40:46.145773  280056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:40:46.157581  280056 ssh_runner.go:195] Run: openssl version
	I1121 14:40:46.163196  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145422.pem && ln -fs /usr/share/ca-certificates/145422.pem /etc/ssl/certs/145422.pem"
	I1121 14:40:46.171390  280056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145422.pem
	I1121 14:40:46.174733  280056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145422.pem
	I1121 14:40:46.174777  280056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145422.pem
	I1121 14:40:46.211212  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145422.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:40:46.218830  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:40:46.226780  280056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:46.230239  280056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:46.230281  280056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:46.264064  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:40:46.271501  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14542.pem && ln -fs /usr/share/ca-certificates/14542.pem /etc/ssl/certs/14542.pem"
	I1121 14:40:46.279591  280056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14542.pem
	I1121 14:40:46.282952  280056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14542.pem
	I1121 14:40:46.282984  280056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	I1121 14:40:46.316214  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14542.pem /etc/ssl/certs/51391683.0"
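The pattern in the three blocks above is the standard OpenSSL CA-directory layout: each certificate in /etc/ssl/certs is reachable through a symlink named after its subject hash, which is exactly what `openssl x509 -hash` prints (3ec20f2e, b5213941, and 51391683 in this run). A generic one-liner for the same operation:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$pem").0"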
	I1121 14:40:46.323317  280056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:40:46.327082  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:40:46.362145  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:40:46.397494  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:40:46.432068  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:40:46.476192  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:40:46.524752  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1121 14:40:46.572490  280056 kubeadm.go:401] StartCluster: {Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:46.572631  280056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:40:46.572688  280056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:40:46.608977  280056 cri.go:89] found id: "15917aa53c8197587b8ebdb80d10a679b2a7abe6ff5a81b0d4f5a42900e02412"
	I1121 14:40:46.609002  280056 cri.go:89] found id: "76d7dc76ff36d3fb6387582649e9fc04ab3c1cbd059a19721997d005f5434abc"
	I1121 14:40:46.609007  280056 cri.go:89] found id: "ecf45bf1d37d86d0c9346ae4b4f597dfe7f80bbc47df49bb0994a548a8922b4b"
	I1121 14:40:46.609011  280056 cri.go:89] found id: "958b1593ef47f88f59d980553b03bdcf6b5f2c94efadd777421a6a497aa6ba37"
	I1121 14:40:46.609015  280056 cri.go:89] found id: ""
	I1121 14:40:46.609064  280056 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 14:40:46.623457  280056 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:40:46Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:40:46.623543  280056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:40:46.631552  280056 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:40:46.631604  280056 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:40:46.631642  280056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:40:46.639112  280056 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:40:46.640392  280056 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-696683" does not appear in /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:46.641294  280056 kubeconfig.go:62] /home/jenkins/minikube-integration/21847-11045/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-696683" cluster setting kubeconfig missing "newest-cni-696683" context setting]
	I1121 14:40:46.642656  280056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:46.644901  280056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:40:46.652585  280056 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1121 14:40:46.652614  280056 kubeadm.go:602] duration metric: took 21.003413ms to restartPrimaryControlPlane
	I1121 14:40:46.652622  280056 kubeadm.go:403] duration metric: took 80.144736ms to StartCluster
	I1121 14:40:46.652645  280056 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:46.652695  280056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:46.655150  280056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:46.655378  280056 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:40:46.655488  280056 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:40:46.655593  280056 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-696683"
	I1121 14:40:46.655610  280056 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-696683"
	W1121 14:40:46.655619  280056 addons.go:248] addon storage-provisioner should already be in state true
	I1121 14:40:46.655632  280056 config.go:182] Loaded profile config "newest-cni-696683": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:46.655645  280056 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:46.655665  280056 addons.go:70] Setting dashboard=true in profile "newest-cni-696683"
	I1121 14:40:46.655693  280056 addons.go:239] Setting addon dashboard=true in "newest-cni-696683"
	W1121 14:40:46.655703  280056 addons.go:248] addon dashboard should already be in state true
	I1121 14:40:46.655689  280056 addons.go:70] Setting default-storageclass=true in profile "newest-cni-696683"
	I1121 14:40:46.655739  280056 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:46.655746  280056 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-696683"
	I1121 14:40:46.656081  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.656134  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.656263  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.659699  280056 out.go:179] * Verifying Kubernetes components...
	I1121 14:40:46.660933  280056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:46.682004  280056 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1121 14:40:46.682004  280056 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:40:46.682500  280056 addons.go:239] Setting addon default-storageclass=true in "newest-cni-696683"
	W1121 14:40:46.682522  280056 addons.go:248] addon default-storageclass should already be in state true
	I1121 14:40:46.682547  280056 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:46.683001  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.686740  280056 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:46.686759  280056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:40:46.686806  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:46.688141  280056 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1121 14:40:43.976021  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	W1121 14:40:45.976308  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	I1121 14:40:46.689188  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1121 14:40:46.689209  280056 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1121 14:40:46.689271  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:46.713217  280056 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:46.713242  280056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:40:46.713295  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:46.720516  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:46.724111  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:46.739551  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:46.802053  280056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:46.815536  280056 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:40:46.815609  280056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:40:46.828043  280056 api_server.go:72] duration metric: took 172.633997ms to wait for apiserver process to appear ...
	I1121 14:40:46.828064  280056 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:40:46.828080  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:46.838809  280056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:46.840678  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1121 14:40:46.840695  280056 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1121 14:40:46.852409  280056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:46.856391  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1121 14:40:46.856410  280056 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1121 14:40:46.871966  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1121 14:40:46.871983  280056 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1121 14:40:46.887375  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1121 14:40:46.887424  280056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1121 14:40:46.902141  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1121 14:40:46.902162  280056 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1121 14:40:46.917178  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1121 14:40:46.917195  280056 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1121 14:40:46.930976  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1121 14:40:46.930993  280056 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1121 14:40:46.944066  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1121 14:40:46.944083  280056 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1121 14:40:46.956286  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 14:40:46.956305  280056 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1121 14:40:46.968997  280056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
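Once the dashboard manifests above are applied, the addon can be checked directly against the cluster with the same kubectl invocation style this log uses (the kubernetes-dashboard namespace is an assumption based on the standard manifests that dashboard-ns.yaml creates):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard get deploy,svc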
	I1121 14:40:48.462794  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1121 14:40:48.462825  280056 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1121 14:40:48.462841  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:48.469024  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1121 14:40:48.469051  280056 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1121 14:40:48.829162  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:48.834337  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:40:48.834367  280056 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 14:40:48.963574  280056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.124616797s)
	I1121 14:40:48.963650  280056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.111184837s)
	I1121 14:40:48.963723  280056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.994699389s)
	I1121 14:40:48.965217  280056 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-696683 addons enable metrics-server
	
	I1121 14:40:48.973715  280056 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1121 14:40:48.974963  280056 addons.go:530] duration metric: took 2.319478862s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1121 14:40:49.328711  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:49.333400  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:40:49.333420  280056 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 14:40:49.829132  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:49.833697  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 14:40:49.834649  280056 api_server.go:141] control plane version: v1.34.1
	I1121 14:40:49.834670  280056 api_server.go:131] duration metric: took 3.006599871s to wait for apiserver health ...
	I1121 14:40:49.834678  280056 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:40:49.838227  280056 system_pods.go:59] 8 kube-system pods found
	I1121 14:40:49.838263  280056 system_pods.go:61] "coredns-66bc5c9577-ncl4f" [93a097a2-31da-4456-8435-e1a976f3d7f9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 14:40:49.838273  280056 system_pods.go:61] "etcd-newest-cni-696683" [113e31f1-f22b-4ed8-adcb-8c12d55e1f4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:40:49.838286  280056 system_pods.go:61] "kindnet-m6v5n" [98b995f3-7968-4e19-abc1-10772001bd6c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1121 14:40:49.838301  280056 system_pods.go:61] "kube-apiserver-newest-cni-696683" [a046bba0-991c-4291-b89a-a0e64e3686b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:40:49.838311  280056 system_pods.go:61] "kube-controller-manager-newest-cni-696683" [dd3689f1-9ccf-4bca-8147-1779d92c3598] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:40:49.838318  280056 system_pods.go:61] "kube-proxy-2dkdg" [13ba7b82-bf92-4b76-a812-685c12ecb21c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1121 14:40:49.838331  280056 system_pods.go:61] "kube-scheduler-newest-cni-696683" [57fd312e-bc77-4ecb-9f3b-caa50247e033] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:40:49.838337  280056 system_pods.go:61] "storage-provisioner" [3cf44ed4-4cd8-4655-aef5-38415eb66de4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 14:40:49.838351  280056 system_pods.go:74] duration metric: took 3.666864ms to wait for pod list to return data ...
	I1121 14:40:49.838364  280056 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:40:49.840748  280056 default_sa.go:45] found service account: "default"
	I1121 14:40:49.840769  280056 default_sa.go:55] duration metric: took 2.395802ms for default service account to be created ...
	I1121 14:40:49.840783  280056 kubeadm.go:587] duration metric: took 3.185377365s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1121 14:40:49.840808  280056 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:40:49.842953  280056 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:40:49.842978  280056 node_conditions.go:123] node cpu capacity is 8
	I1121 14:40:49.842993  280056 node_conditions.go:105] duration metric: took 2.175119ms to run NodePressure ...
	I1121 14:40:49.843009  280056 start.go:242] waiting for startup goroutines ...
	I1121 14:40:49.843022  280056 start.go:247] waiting for cluster config update ...
	I1121 14:40:49.843039  280056 start.go:256] writing updated cluster config ...
	I1121 14:40:49.843325  280056 ssh_runner.go:195] Run: rm -f paused
	I1121 14:40:49.887622  280056 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:40:49.890008  280056 out.go:179] * Done! kubectl is now configured to use "newest-cni-696683" cluster and "default" namespace by default
	W1121 14:40:45.694185  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:47.694654  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:50.194324  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:48.477939  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	I1121 14:40:50.487901  269911 node_ready.go:49] node "auto-989875" is "Ready"
	I1121 14:40:50.487937  269911 node_ready.go:38] duration metric: took 11.014560663s for node "auto-989875" to be "Ready" ...
	I1121 14:40:50.487951  269911 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:40:50.488000  269911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:40:50.506437  269911 api_server.go:72] duration metric: took 11.333890908s to wait for apiserver process to appear ...
	I1121 14:40:50.506462  269911 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:40:50.506481  269911 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1121 14:40:50.511381  269911 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1121 14:40:50.512348  269911 api_server.go:141] control plane version: v1.34.1
	I1121 14:40:50.512374  269911 api_server.go:131] duration metric: took 5.904455ms to wait for apiserver health ...
	I1121 14:40:50.512385  269911 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:40:50.516900  269911 system_pods.go:59] 8 kube-system pods found
	I1121 14:40:50.516933  269911 system_pods.go:61] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:50.516942  269911 system_pods.go:61] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:50.516954  269911 system_pods.go:61] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:50.516961  269911 system_pods.go:61] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:50.516969  269911 system_pods.go:61] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:50.516975  269911 system_pods.go:61] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:50.516983  269911 system_pods.go:61] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:50.516988  269911 system_pods.go:61] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending
	I1121 14:40:50.516995  269911 system_pods.go:74] duration metric: took 4.603561ms to wait for pod list to return data ...
	I1121 14:40:50.517018  269911 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:40:50.519971  269911 default_sa.go:45] found service account: "default"
	I1121 14:40:50.519990  269911 default_sa.go:55] duration metric: took 2.962898ms for default service account to be created ...
	I1121 14:40:50.520000  269911 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:40:50.523136  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:50.523178  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:50.523193  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:50.523202  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:50.523207  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:50.523212  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:50.523218  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:50.523222  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:50.523233  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending
	I1121 14:40:50.523254  269911 retry.go:31] will retry after 276.59635ms: missing components: kube-dns
	I1121 14:40:50.803782  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:50.803812  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:50.803820  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:50.803826  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:50.803830  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:50.803843  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:50.803847  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:50.803850  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:50.803854  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:40:50.803868  269911 retry.go:31] will retry after 254.453611ms: missing components: kube-dns
	I1121 14:40:51.063022  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:51.063048  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:51.063054  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:51.063060  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:51.063064  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:51.063070  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:51.063073  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:51.063076  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:51.063080  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:40:51.063093  269911 retry.go:31] will retry after 307.771212ms: missing components: kube-dns
	I1121 14:40:51.375222  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:51.375255  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:51.375268  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:51.375276  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:51.375282  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:51.375288  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:51.375299  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:51.375304  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:51.375315  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:40:51.375332  269911 retry.go:31] will retry after 408.234241ms: missing components: kube-dns
	I1121 14:40:51.790035  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:51.790067  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Running
	I1121 14:40:51.790076  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:51.790082  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:51.790088  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:51.790095  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:51.790101  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:51.790106  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:51.790111  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Running
	I1121 14:40:51.790124  269911 system_pods.go:126] duration metric: took 1.270114943s to wait for k8s-apps to be running ...
	I1121 14:40:51.790137  269911 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:40:51.790190  269911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:40:51.806346  269911 system_svc.go:56] duration metric: took 16.201575ms WaitForService to wait for kubelet
	I1121 14:40:51.806377  269911 kubeadm.go:587] duration metric: took 12.633833991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:40:51.806402  269911 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:40:51.808958  269911 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:40:51.808980  269911 node_conditions.go:123] node cpu capacity is 8
	I1121 14:40:51.808992  269911 node_conditions.go:105] duration metric: took 2.584392ms to run NodePressure ...
	I1121 14:40:51.809003  269911 start.go:242] waiting for startup goroutines ...
	I1121 14:40:51.809009  269911 start.go:247] waiting for cluster config update ...
	I1121 14:40:51.809019  269911 start.go:256] writing updated cluster config ...
	I1121 14:40:51.809271  269911 ssh_runner.go:195] Run: rm -f paused
	I1121 14:40:51.812826  269911 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:40:51.816346  269911 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r6m4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.820311  269911 pod_ready.go:94] pod "coredns-66bc5c9577-r6m4z" is "Ready"
	I1121 14:40:51.820332  269911 pod_ready.go:86] duration metric: took 3.96803ms for pod "coredns-66bc5c9577-r6m4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.822259  269911 pod_ready.go:83] waiting for pod "etcd-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.826005  269911 pod_ready.go:94] pod "etcd-auto-989875" is "Ready"
	I1121 14:40:51.826024  269911 pod_ready.go:86] duration metric: took 3.74738ms for pod "etcd-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.827872  269911 pod_ready.go:83] waiting for pod "kube-apiserver-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.831284  269911 pod_ready.go:94] pod "kube-apiserver-auto-989875" is "Ready"
	I1121 14:40:51.831303  269911 pod_ready.go:86] duration metric: took 3.411512ms for pod "kube-apiserver-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.833002  269911 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	
	
	==> CRI-O <==
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.313230292Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.316060617Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a7da7364-2ae0-49d4-ab49-01fe8f125177 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.3167441Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=efb40c11-6d69-4fe0-8e74-27c0c88d66e9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.317403465Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.318095844Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.3182654Z" level=info msg="Ran pod sandbox b559d46ec55a8b696aaaea12a4865670e6784eac3e85e212f4cda273037d763f with infra container: kube-system/kube-proxy-2dkdg/POD" id=a7da7364-2ae0-49d4-ab49-01fe8f125177 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.318954176Z" level=info msg="Ran pod sandbox 07ccd1b0e4b0b72c7b0f5c73817a0f74e3e0602b5c90de1ab77b00fbbf9e0b23 with infra container: kube-system/kindnet-m6v5n/POD" id=efb40c11-6d69-4fe0-8e74-27c0c88d66e9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.319153696Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=c496c6e9-0806-4274-9fbb-f8b2526d3c09 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.321632025Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1f78551a-e495-4fca-82f5-90288c5f0064 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.321669382Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9564d893-05a0-45e3-9ad2-55c34d79ae0c name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.323089677Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=271fe86c-08ac-41bd-b8d2-5132319d40a7 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.323141784Z" level=info msg="Creating container: kube-system/kube-proxy-2dkdg/kube-proxy" id=6a60a860-097d-4855-ae95-89f13a5d02f3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.323263086Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.324044509Z" level=info msg="Creating container: kube-system/kindnet-m6v5n/kindnet-cni" id=b832a5f5-9f3c-4ffb-a745-e8ff753101a2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.324151139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.327247867Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.327734214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.328505544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.328947408Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.357110495Z" level=info msg="Created container 525ca5ccf8737122b9a66a5b0feccd36cea2deec23560d29575393ada0330762: kube-system/kindnet-m6v5n/kindnet-cni" id=b832a5f5-9f3c-4ffb-a745-e8ff753101a2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.357608451Z" level=info msg="Starting container: 525ca5ccf8737122b9a66a5b0feccd36cea2deec23560d29575393ada0330762" id=844b29f1-18d5-4ac7-a0bd-d80ea5460d84 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.358619073Z" level=info msg="Created container e6409bf9f2c514d529c3416169d40e9f48edb2af837cd54c31176b7c2ee0a3d8: kube-system/kube-proxy-2dkdg/kube-proxy" id=6a60a860-097d-4855-ae95-89f13a5d02f3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.35906394Z" level=info msg="Starting container: e6409bf9f2c514d529c3416169d40e9f48edb2af837cd54c31176b7c2ee0a3d8" id=126768cc-2c4e-467f-95b3-44dfd9144bfc name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.359253187Z" level=info msg="Started container" PID=1058 containerID=525ca5ccf8737122b9a66a5b0feccd36cea2deec23560d29575393ada0330762 description=kube-system/kindnet-m6v5n/kindnet-cni id=844b29f1-18d5-4ac7-a0bd-d80ea5460d84 name=/runtime.v1.RuntimeService/StartContainer sandboxID=07ccd1b0e4b0b72c7b0f5c73817a0f74e3e0602b5c90de1ab77b00fbbf9e0b23
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.361438594Z" level=info msg="Started container" PID=1059 containerID=e6409bf9f2c514d529c3416169d40e9f48edb2af837cd54c31176b7c2ee0a3d8 description=kube-system/kube-proxy-2dkdg/kube-proxy id=126768cc-2c4e-467f-95b3-44dfd9144bfc name=/runtime.v1.RuntimeService/StartContainer sandboxID=b559d46ec55a8b696aaaea12a4865670e6784eac3e85e212f4cda273037d763f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	525ca5ccf8737       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   3 seconds ago       Running             kindnet-cni               1                   07ccd1b0e4b0b       kindnet-m6v5n                               kube-system
	e6409bf9f2c51       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   3 seconds ago       Running             kube-proxy                1                   b559d46ec55a8       kube-proxy-2dkdg                            kube-system
	15917aa53c819       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   6 seconds ago       Running             kube-apiserver            1                   6cebd072730a9       kube-apiserver-newest-cni-696683            kube-system
	76d7dc76ff36d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   6 seconds ago       Running             kube-controller-manager   1                   c9e04a0694739       kube-controller-manager-newest-cni-696683   kube-system
	ecf45bf1d37d8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 seconds ago       Running             etcd                      1                   5ad4a662d4f2b       etcd-newest-cni-696683                      kube-system
	958b1593ef47f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   6 seconds ago       Running             kube-scheduler            1                   68afd87728861       kube-scheduler-newest-cni-696683            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-696683
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-696683
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=newest-cni-696683
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_40_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:40:24 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-696683
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:40:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:40:48 +0000   Fri, 21 Nov 2025 14:40:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:40:48 +0000   Fri, 21 Nov 2025 14:40:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:40:48 +0000   Fri, 21 Nov 2025 14:40:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 21 Nov 2025 14:40:48 +0000   Fri, 21 Nov 2025 14:40:23 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-696683
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                eb56864d-718a-4ff0-98f9-3e18a790b305
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-696683                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         26s
	  kube-system                 kindnet-m6v5n                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-696683             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-newest-cni-696683    200m (2%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-2dkdg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-696683             100m (1%)     0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  31s (x8 over 31s)  kubelet          Node newest-cni-696683 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s (x8 over 31s)  kubelet          Node newest-cni-696683 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s (x8 over 31s)  kubelet          Node newest-cni-696683 status is now: NodeHasSufficientPID
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s                kubelet          Node newest-cni-696683 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s                kubelet          Node newest-cni-696683 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s                kubelet          Node newest-cni-696683 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22s                node-controller  Node newest-cni-696683 event: Registered Node newest-cni-696683 in Controller
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-696683 event: Registered Node newest-cni-696683 in Controller
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [ecf45bf1d37d86d0c9346ae4b4f597dfe7f80bbc47df49bb0994a548a8922b4b] <==
	{"level":"warn","ts":"2025-11-21T14:40:47.871970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.877874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.885469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.893888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.901044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.907034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.914422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.920955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.927085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.933543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.940375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.946872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.953216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.959397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.965903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.978257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.984726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.997300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:48.004619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:48.010925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:48.016872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:48.038808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:48.045074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:48.051137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:48.097486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59068","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:40:53 up  1:23,  0 user,  load average: 4.40, 3.04, 1.94
	Linux newest-cni-696683 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [525ca5ccf8737122b9a66a5b0feccd36cea2deec23560d29575393ada0330762] <==
	I1121 14:40:49.507935       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:40:49.508187       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:40:49.508315       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:40:49.508337       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:40:49.508365       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:40:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:40:49.710116       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:40:49.710144       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:40:49.710159       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:40:49.710364       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [15917aa53c8197587b8ebdb80d10a679b2a7abe6ff5a81b0d4f5a42900e02412] <==
	I1121 14:40:48.539611       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1121 14:40:48.539548       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1121 14:40:48.540491       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 14:40:48.539633       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1121 14:40:48.540624       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 14:40:48.540639       1 aggregator.go:171] initial CRD sync complete...
	I1121 14:40:48.540655       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 14:40:48.540662       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:40:48.540669       1 cache.go:39] Caches are synced for autoregister controller
	E1121 14:40:48.544876       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 14:40:48.547005       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 14:40:48.571191       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:40:48.769079       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:40:48.798248       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:40:48.823315       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:40:48.830250       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:40:48.836577       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:40:48.864469       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.13.24"}
	I1121 14:40:48.873605       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.97.77"}
	I1121 14:40:49.442337       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:40:51.874261       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:40:52.225338       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:40:52.423407       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [76d7dc76ff36d3fb6387582649e9fc04ab3c1cbd059a19721997d005f5434abc] <==
	I1121 14:40:51.836043       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:40:51.836051       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:40:51.841691       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 14:40:51.859197       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:40:51.862784       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:40:51.870157       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 14:40:51.870258       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:40:51.871306       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:40:51.871330       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 14:40:51.871343       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:40:51.871399       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:40:51.871446       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:40:51.871487       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:40:51.874257       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:40:51.875495       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:40:51.876574       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 14:40:51.876900       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:40:51.878939       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:40:51.881342       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:40:51.883247       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:40:51.884270       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:40:51.885414       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:40:51.887396       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:40:51.887645       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:40:51.889398       1 shared_informer.go:356] "Caches are synced" controller="GC"
	
	
	==> kube-proxy [e6409bf9f2c514d529c3416169d40e9f48edb2af837cd54c31176b7c2ee0a3d8] <==
	I1121 14:40:49.392694       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:40:49.472235       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:40:49.572474       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:40:49.572503       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:40:49.572592       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:40:49.593536       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:40:49.593622       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:40:49.599670       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:40:49.599950       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:40:49.599981       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:40:49.601486       1 config.go:200] "Starting service config controller"
	I1121 14:40:49.601519       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:40:49.601738       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:40:49.601769       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:40:49.601841       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:40:49.601851       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:40:49.601748       1 config.go:309] "Starting node config controller"
	I1121 14:40:49.601889       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:40:49.601900       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:40:49.702345       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:40:49.702501       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:40:49.702523       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [958b1593ef47f88f59d980553b03bdcf6b5f2c94efadd777421a6a497aa6ba37] <==
	I1121 14:40:47.263621       1 serving.go:386] Generated self-signed cert in-memory
	W1121 14:40:48.464162       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1121 14:40:48.464281       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1121 14:40:48.464303       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1121 14:40:48.464314       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1121 14:40:48.495394       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 14:40:48.495429       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:40:48.497322       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:40:48.497357       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:40:48.497610       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:40:48.497970       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 14:40:48.598503       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: E1121 14:40:48.053006     683 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-696683\" not found" node="newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.511642     683 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: E1121 14:40:48.521193     683 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-696683\" already exists" pod="kube-system/kube-apiserver-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.521233     683 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: E1121 14:40:48.527351     683 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-696683\" already exists" pod="kube-system/kube-controller-manager-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.527383     683 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: E1121 14:40:48.531113     683 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-696683\" already exists" pod="kube-system/kube-scheduler-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.531140     683 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: E1121 14:40:48.536015     683 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-696683\" already exists" pod="kube-system/etcd-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.573755     683 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.573855     683 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.573894     683 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.574812     683 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.005762     683 apiserver.go:52] "Watching apiserver"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.009803     683 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.053961     683 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-696683"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: E1121 14:40:49.059953     683 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-696683\" already exists" pod="kube-system/kube-apiserver-newest-cni-696683"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.081504     683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98b995f3-7968-4e19-abc1-10772001bd6c-xtables-lock\") pod \"kindnet-m6v5n\" (UID: \"98b995f3-7968-4e19-abc1-10772001bd6c\") " pod="kube-system/kindnet-m6v5n"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.081542     683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98b995f3-7968-4e19-abc1-10772001bd6c-lib-modules\") pod \"kindnet-m6v5n\" (UID: \"98b995f3-7968-4e19-abc1-10772001bd6c\") " pod="kube-system/kindnet-m6v5n"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.081582     683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/98b995f3-7968-4e19-abc1-10772001bd6c-cni-cfg\") pod \"kindnet-m6v5n\" (UID: \"98b995f3-7968-4e19-abc1-10772001bd6c\") " pod="kube-system/kindnet-m6v5n"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.081636     683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13ba7b82-bf92-4b76-a812-685c12ecb21c-lib-modules\") pod \"kube-proxy-2dkdg\" (UID: \"13ba7b82-bf92-4b76-a812-685c12ecb21c\") " pod="kube-system/kube-proxy-2dkdg"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.081673     683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13ba7b82-bf92-4b76-a812-685c12ecb21c-xtables-lock\") pod \"kube-proxy-2dkdg\" (UID: \"13ba7b82-bf92-4b76-a812-685c12ecb21c\") " pod="kube-system/kube-proxy-2dkdg"
	Nov 21 14:40:50 newest-cni-696683 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:40:50 newest-cni-696683 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:40:50 newest-cni-696683 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
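The journal tail above ends with systemd stopping kubelet.service, which is consistent with the pause attempt under test: a paused node is expected to leave the kubelet unit inactive. A minimal sketch of how to confirm that from outside, assuming this run's profile name (newest-cni-696683):

	# query the kubelet unit state inside the minikube node; "inactive" is expected after a successful pause
	out/minikube-linux-amd64 -p newest-cni-696683 ssh -- sudo systemctl is-active kubelet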
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-696683 -n newest-cni-696683
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-696683 -n newest-cni-696683: exit status 2 (362.781235ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-696683 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-ncl4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-snb4j kubernetes-dashboard-855c9754f9-n4fn8
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-696683 describe pod coredns-66bc5c9577-ncl4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-snb4j kubernetes-dashboard-855c9754f9-n4fn8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-696683 describe pod coredns-66bc5c9577-ncl4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-snb4j kubernetes-dashboard-855c9754f9-n4fn8: exit status 1 (78.474909ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-ncl4f" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-snb4j" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-n4fn8" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-696683 describe pod coredns-66bc5c9577-ncl4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-snb4j kubernetes-dashboard-855c9754f9-n4fn8: exit status 1
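The NotFound errors above mean the pods flagged by the non-running scan were already gone by the time describe ran. The scan itself is a plain field-selector query; a minimal sketch of the same check, using the kubectl context from this run:

	# list pods in every namespace whose phase is anything other than Running
	kubectl --context newest-cni-696683 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'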
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-696683
helpers_test.go:243: (dbg) docker inspect newest-cni-696683:

-- stdout --
	[
	    {
	        "Id": "5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d",
	        "Created": "2025-11-21T14:40:09.858539205Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280415,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:40:39.963886418Z",
	            "FinishedAt": "2025-11-21T14:40:38.952434797Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d/hostname",
	        "HostsPath": "/var/lib/docker/containers/5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d/hosts",
	        "LogPath": "/var/lib/docker/containers/5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d/5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d-json.log",
	        "Name": "/newest-cni-696683",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-696683:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-696683",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5aacf10261f216094844f8e03cf2bc73194284e3d15f0368fc3a8ff21809591d",
	                "LowerDir": "/var/lib/docker/overlay2/655e2907b15a841ba8d7c09b0eecf0c4c7a490c173b62e8a174062781efe4d9f-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/655e2907b15a841ba8d7c09b0eecf0c4c7a490c173b62e8a174062781efe4d9f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/655e2907b15a841ba8d7c09b0eecf0c4c7a490c173b62e8a174062781efe4d9f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/655e2907b15a841ba8d7c09b0eecf0c4c7a490c173b62e8a174062781efe4d9f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-696683",
	                "Source": "/var/lib/docker/volumes/newest-cni-696683/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-696683",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-696683",
	                "name.minikube.sigs.k8s.io": "newest-cni-696683",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "07af311bb555603fa47ba27896c6060e64b510d8a78421146c568859c13cf876",
	            "SandboxKey": "/var/run/docker/netns/07af311bb555",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-696683": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3b7fce235b16a39fb4cd51190508048f90b9443938b78208046c510cbfbee936",
	                    "EndpointID": "0680bdf179228b30d99a20f9ad54df55fc62dd2f35a770ce8a30c584b0b497ca",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "76:2b:1e:e5:a7:9d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-696683",
	                        "5aacf10261f2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
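The NetworkSettings.Ports block in the inspect output above is where minikube resolves the host-mapped SSH port (the same Go template appears in the cli_runner calls later in this log). A minimal sketch of that lookup, using the container name from this run:

	# resolve the host port bound to the node's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-696683
	# for this run the template resolves to 33099 (see "Ports" above)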
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-696683 -n newest-cni-696683
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-696683 -n newest-cni-696683: exit status 2 (348.178844ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-696683 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-696683 logs -n 25: (1.134698689s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-214044    │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ addons  │ enable metrics-server -p embed-certs-441390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ image   │ no-preload-589411 image list --format=json                                                                                                                                                                                                    │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ pause   │ -p no-preload-589411 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ stop    │ -p embed-certs-441390 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p cert-expiration-046125 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-046125       │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p kubernetes-upgrade-214044                                                                                                                                                                                                                  │ kubernetes-upgrade-214044    │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p disable-driver-mounts-708207                                                                                                                                                                                                               │ disable-driver-mounts-708207 │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p default-k8s-diff-port-859276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-859276 │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ delete  │ -p no-preload-589411                                                                                                                                                                                                                          │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p cert-expiration-046125                                                                                                                                                                                                                     │ cert-expiration-046125       │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p no-preload-589411                                                                                                                                                                                                                          │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p newest-cni-696683 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p auto-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-441390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p embed-certs-441390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ addons  │ enable metrics-server -p newest-cni-696683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ stop    │ -p newest-cni-696683 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ addons  │ enable dashboard -p newest-cni-696683 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p newest-cni-696683 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ image   │ newest-cni-696683 image list --format=json                                                                                                                                                                                                    │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ pause   │ -p newest-cni-696683 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ image   │ embed-certs-441390 image list --format=json                                                                                                                                                                                                   │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ pause   │ -p embed-certs-441390 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ ssh     │ -p auto-989875 pgrep -a kubelet                                                                                                                                                                                                               │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:40:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:40:39.698658  280056 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:40:39.699061  280056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:39.699072  280056 out.go:374] Setting ErrFile to fd 2...
	I1121 14:40:39.699078  280056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:39.699382  280056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:40:39.700016  280056 out.go:368] Setting JSON to false
	I1121 14:40:39.701601  280056 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4989,"bootTime":1763731051,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:40:39.701719  280056 start.go:143] virtualization: kvm guest
	I1121 14:40:39.703395  280056 out.go:179] * [newest-cni-696683] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:40:39.705010  280056 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:40:39.705085  280056 notify.go:221] Checking for updates...
	I1121 14:40:39.709543  280056 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:40:39.710889  280056 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:39.711654  280056 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:40:39.712608  280056 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:40:39.202164  269911 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:39.202179  269911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:40:39.202225  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:39.203262  269911 addons.go:239] Setting addon default-storageclass=true in "auto-989875"
	I1121 14:40:39.203302  269911 host.go:66] Checking if "auto-989875" exists ...
	I1121 14:40:39.203757  269911 cli_runner.go:164] Run: docker container inspect auto-989875 --format={{.State.Status}}
	I1121 14:40:39.231112  269911 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:39.231135  269911 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:40:39.231188  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:39.231351  269911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/auto-989875/id_rsa Username:docker}
	I1121 14:40:39.253202  269911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/auto-989875/id_rsa Username:docker}
	I1121 14:40:39.264883  269911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:40:39.321852  269911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:39.368717  269911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:39.374920  269911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:39.469946  269911 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1121 14:40:39.473299  269911 node_ready.go:35] waiting up to 15m0s for node "auto-989875" to be "Ready" ...
	I1121 14:40:39.713942  269911 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:40:39.714030  280056 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:40:39.715503  280056 config.go:182] Loaded profile config "newest-cni-696683": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:39.715983  280056 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:40:39.743756  280056 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:40:39.743915  280056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:40:39.815430  280056 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:40:39.803326466 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:40:39.815546  280056 docker.go:319] overlay module found
	I1121 14:40:39.816656  280056 out.go:179] * Using the docker driver based on existing profile
	I1121 14:40:39.817754  280056 start.go:309] selected driver: docker
	I1121 14:40:39.817774  280056 start.go:930] validating driver "docker" against &{Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:39.817892  280056 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:40:39.818542  280056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:40:39.891888  280056 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:40:39.880844572 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:40:39.892243  280056 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1121 14:40:39.892278  280056 cni.go:84] Creating CNI manager for ""
	I1121 14:40:39.892328  280056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:40:39.892373  280056 start.go:353] cluster config:
	{Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:39.894009  280056 out.go:179] * Starting "newest-cni-696683" primary control-plane node in "newest-cni-696683" cluster
	I1121 14:40:39.894992  280056 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:40:39.895975  280056 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:40:39.896900  280056 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:40:39.896944  280056 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 14:40:39.896959  280056 cache.go:65] Caching tarball of preloaded images
	I1121 14:40:39.896995  280056 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:40:39.897060  280056 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 14:40:39.897075  280056 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:40:39.897184  280056 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/config.json ...
	I1121 14:40:39.918549  280056 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:40:39.918582  280056 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:40:39.918603  280056 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:40:39.918629  280056 start.go:360] acquireMachinesLock for newest-cni-696683: {Name:mk685873e16cf8d4315d67b3bf50f89f3c32618f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:40:39.918691  280056 start.go:364] duration metric: took 39.301µs to acquireMachinesLock for "newest-cni-696683"
	I1121 14:40:39.918713  280056 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:40:39.918723  280056 fix.go:54] fixHost starting: 
	I1121 14:40:39.918941  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:39.939232  280056 fix.go:112] recreateIfNeeded on newest-cni-696683: state=Stopped err=<nil>
	W1121 14:40:39.939257  280056 fix.go:138] unexpected machine state, will restart: <nil>
	W1121 14:40:37.195055  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:39.196240  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	I1121 14:40:39.715037  269911 addons.go:530] duration metric: took 541.90535ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:40:39.974357  269911 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-989875" context rescaled to 1 replicas
	W1121 14:40:41.476592  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	I1121 14:40:39.940709  280056 out.go:252] * Restarting existing docker container for "newest-cni-696683" ...
	I1121 14:40:39.940774  280056 cli_runner.go:164] Run: docker start newest-cni-696683
	I1121 14:40:40.204292  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:40.225047  280056 kic.go:430] container "newest-cni-696683" state is running.
	I1121 14:40:40.225352  280056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-696683
	I1121 14:40:40.245950  280056 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/config.json ...
	I1121 14:40:40.246193  280056 machine.go:94] provisionDockerMachine start ...
	I1121 14:40:40.246264  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:40.266155  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:40.266469  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:40.266487  280056 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:40:40.267187  280056 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58902->127.0.0.1:33099: read: connection reset by peer
	I1121 14:40:43.397206  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-696683
	
	I1121 14:40:43.397237  280056 ubuntu.go:182] provisioning hostname "newest-cni-696683"
	I1121 14:40:43.397300  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:43.416243  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:43.416538  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:43.416568  280056 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-696683 && echo "newest-cni-696683" | sudo tee /etc/hostname
	I1121 14:40:43.552946  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-696683
	
	I1121 14:40:43.553020  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:43.570469  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:43.570726  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:43.570747  280056 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-696683' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-696683/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-696683' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:40:43.699459  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:40:43.699487  280056 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11045/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11045/.minikube}
	I1121 14:40:43.699509  280056 ubuntu.go:190] setting up certificates
	I1121 14:40:43.699518  280056 provision.go:84] configureAuth start
	I1121 14:40:43.699572  280056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-696683
	I1121 14:40:43.716911  280056 provision.go:143] copyHostCerts
	I1121 14:40:43.716971  280056 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem, removing ...
	I1121 14:40:43.716988  280056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem
	I1121 14:40:43.717063  280056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem (1078 bytes)
	I1121 14:40:43.717170  280056 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem, removing ...
	I1121 14:40:43.717182  280056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem
	I1121 14:40:43.717225  280056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem (1123 bytes)
	I1121 14:40:43.717301  280056 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem, removing ...
	I1121 14:40:43.717311  280056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem
	I1121 14:40:43.717354  280056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem (1679 bytes)
	I1121 14:40:43.717424  280056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem org=jenkins.newest-cni-696683 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-696683]
	I1121 14:40:43.898083  280056 provision.go:177] copyRemoteCerts
	I1121 14:40:43.898146  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:40:43.898203  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:43.915505  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.009431  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:40:44.026983  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:40:44.043724  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:40:44.059839  280056 provision.go:87] duration metric: took 360.308976ms to configureAuth
	I1121 14:40:44.059858  280056 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:40:44.060029  280056 config.go:182] Loaded profile config "newest-cni-696683": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:44.060145  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.078061  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:44.078262  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:44.078281  280056 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:40:44.359271  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:40:44.359300  280056 machine.go:97] duration metric: took 4.113090842s to provisionDockerMachine
	I1121 14:40:44.359333  280056 start.go:293] postStartSetup for "newest-cni-696683" (driver="docker")
	I1121 14:40:44.359359  280056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:40:44.359441  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:40:44.359503  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.377727  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.471221  280056 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:40:44.474531  280056 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:40:44.474582  280056 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:40:44.474595  280056 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/addons for local assets ...
	I1121 14:40:44.474657  280056 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/files for local assets ...
	I1121 14:40:44.474769  280056 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem -> 145422.pem in /etc/ssl/certs
	I1121 14:40:44.474885  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:40:44.482193  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:40:44.498766  280056 start.go:296] duration metric: took 139.419384ms for postStartSetup
	I1121 14:40:44.498841  280056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:40:44.498885  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.516283  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.607254  280056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:40:44.611996  280056 fix.go:56] duration metric: took 4.693269423s for fixHost
	I1121 14:40:44.612015  280056 start.go:83] releasing machines lock for "newest-cni-696683", held for 4.693312828s
	I1121 14:40:44.612074  280056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-696683
	I1121 14:40:44.629484  280056 ssh_runner.go:195] Run: cat /version.json
	I1121 14:40:44.629530  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.629596  280056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:40:44.629660  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.646651  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.647257  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	W1121 14:40:41.693191  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:43.693977  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	I1121 14:40:44.789269  280056 ssh_runner.go:195] Run: systemctl --version
	I1121 14:40:44.795157  280056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:40:44.829469  280056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:40:44.833726  280056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:40:44.833770  280056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:40:44.841442  280056 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
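	The find invocation above renames any bridge or podman CNI configs to *.mk_disabled so they cannot shadow the CNI minikube installs (kindnet here); on this node none were present. The same pattern, split out for readability:

	  # Preview which configs would be renamed (same predicates as the logged command):
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) -print
	  # To re-enable one later, strip the suffix (hypothetical file name):
	  # sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist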
	I1121 14:40:44.841462  280056 start.go:496] detecting cgroup driver to use...
	I1121 14:40:44.841500  280056 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:40:44.841546  280056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:40:44.855704  280056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:40:44.867322  280056 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:40:44.867355  280056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:40:44.880286  280056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:40:44.891778  280056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:40:44.971173  280056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:40:45.053340  280056 docker.go:234] disabling docker service ...
	I1121 14:40:45.053430  280056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:40:45.066798  280056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:40:45.078751  280056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:40:45.158914  280056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:40:45.236074  280056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:40:45.247464  280056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:40:45.260830  280056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:40:45.260881  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.268922  280056 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 14:40:45.268972  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.276871  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.284760  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.292909  280056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:40:45.300239  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.308497  280056 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.316091  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
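	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with a pinned pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. Roughly (reconstructed from the commands, not copied from the host; the real file carries more keys):

	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]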
	I1121 14:40:45.324294  280056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:40:45.330973  280056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:40:45.337651  280056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:45.412162  280056 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 14:40:45.548953  280056 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:40:45.549022  280056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:40:45.552808  280056 start.go:564] Will wait 60s for crictl version
	I1121 14:40:45.552866  280056 ssh_runner.go:195] Run: which crictl
	I1121 14:40:45.556653  280056 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:40:45.580611  280056 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
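	crictl finds the runtime here without an explicit --runtime-endpoint because of the /etc/crictl.yaml written a few steps earlier. Once that file points at the CRI-O socket, any crictl subcommand resolves the same way, e.g.:

	  # Uses runtime-endpoint: unix:///var/run/crio/crio.sock from /etc/crictl.yaml
	  sudo crictl info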
	I1121 14:40:45.580686  280056 ssh_runner.go:195] Run: crio --version
	I1121 14:40:45.607820  280056 ssh_runner.go:195] Run: crio --version
	I1121 14:40:45.636081  280056 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:40:45.637049  280056 cli_runner.go:164] Run: docker network inspect newest-cni-696683 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:40:45.652698  280056 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:40:45.656512  280056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
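	The one-liner above is minikube's idempotent /etc/hosts update: filter out any existing entry for the name, append the fresh mapping, then cp (not mv) the temp file over /etc/hosts, since inside a Docker container /etc/hosts is a bind mount and must be rewritten in place. The pattern, generalized (NAME and ADDR are placeholders):

	  NAME=host.minikube.internal
	  ADDR=192.168.85.1
	  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$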
	I1121 14:40:45.667700  280056 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1121 14:40:45.668667  280056 kubeadm.go:884] updating cluster {Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:40:45.668785  280056 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:40:45.668828  280056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:40:45.700321  280056 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:40:45.700343  280056 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:40:45.700378  280056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:40:45.724113  280056 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:40:45.724131  280056 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:40:45.724139  280056 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1121 14:40:45.724223  280056 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-696683 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
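	The empty ExecStart= line in the generated drop-in is the standard systemd idiom for overriding a unit's command: it clears the ExecStart list inherited from kubelet.service before the second line sets the kubeadm-specific flags. Once the files are copied in, the effect can be inspected with:

	  systemctl cat kubelet          # unit plus drop-ins, including 10-kubeadm.conf
	  sudo systemctl daemon-reload   # required before an edited drop-in takes effect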
	I1121 14:40:45.724281  280056 ssh_runner.go:195] Run: crio config
	I1121 14:40:45.769317  280056 cni.go:84] Creating CNI manager for ""
	I1121 14:40:45.769335  280056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:40:45.769351  280056 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1121 14:40:45.769371  280056 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-696683 NodeName:newest-cni-696683 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:40:45.769497  280056 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-696683"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 14:40:45.769548  280056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:40:45.777468  280056 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:40:45.777525  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:40:45.785019  280056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1121 14:40:45.796834  280056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:40:45.808433  280056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
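	Note the config lands in kubeadm.yaml.new rather than kubeadm.yaml: minikube later diffs the two (visible further down in this log) to decide whether the existing control plane needs reconfiguring or can simply be restarted. In sketch form:

	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	    && echo "configs identical: restart-only path"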
	I1121 14:40:45.820149  280056 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:40:45.823519  280056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:40:45.832775  280056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:45.917710  280056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:45.942977  280056 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683 for IP: 192.168.85.2
	I1121 14:40:45.942996  280056 certs.go:195] generating shared ca certs ...
	I1121 14:40:45.943016  280056 certs.go:227] acquiring lock for ca certs: {Name:mkde3a7d6f17b238f06eab3a140993599f1b4367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:45.943143  280056 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key
	I1121 14:40:45.943197  280056 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key
	I1121 14:40:45.943209  280056 certs.go:257] generating profile certs ...
	I1121 14:40:45.943287  280056 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/client.key
	I1121 14:40:45.943338  280056 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.key.78303e51
	I1121 14:40:45.943372  280056 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.key
	I1121 14:40:45.943471  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem (1338 bytes)
	W1121 14:40:45.943505  280056 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542_empty.pem, impossibly tiny 0 bytes
	I1121 14:40:45.943516  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:40:45.943543  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:40:45.943582  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:40:45.943611  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem (1679 bytes)
	I1121 14:40:45.943651  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:40:45.944261  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:40:45.962656  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:40:45.981773  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:40:46.000183  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 14:40:46.026245  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:40:46.046663  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:40:46.062648  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:40:46.079837  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:40:46.096146  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:40:46.112465  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem --> /usr/share/ca-certificates/14542.pem (1338 bytes)
	I1121 14:40:46.128984  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /usr/share/ca-certificates/145422.pem (1708 bytes)
	I1121 14:40:46.145773  280056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:40:46.157581  280056 ssh_runner.go:195] Run: openssl version
	I1121 14:40:46.163196  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145422.pem && ln -fs /usr/share/ca-certificates/145422.pem /etc/ssl/certs/145422.pem"
	I1121 14:40:46.171390  280056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145422.pem
	I1121 14:40:46.174733  280056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145422.pem
	I1121 14:40:46.174777  280056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145422.pem
	I1121 14:40:46.211212  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145422.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:40:46.218830  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:40:46.226780  280056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:46.230239  280056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:46.230281  280056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:46.264064  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:40:46.271501  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14542.pem && ln -fs /usr/share/ca-certificates/14542.pem /etc/ssl/certs/14542.pem"
	I1121 14:40:46.279591  280056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14542.pem
	I1121 14:40:46.282952  280056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14542.pem
	I1121 14:40:46.282984  280056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	I1121 14:40:46.316214  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14542.pem /etc/ssl/certs/51391683.0"
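	The 3ec20f2e.0, b5213941.0, and 51391683.0 link names above are OpenSSL subject hashes: OpenSSL looks CA certificates up in /etc/ssl/certs by <hash>.0, which is exactly what the -hash invocations compute. For example:

	  openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	  # prints 51391683, matching the symlink created above:
	  # /etc/ssl/certs/51391683.0 -> /etc/ssl/certs/14542.pem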
	I1121 14:40:46.323317  280056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:40:46.327082  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:40:46.362145  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:40:46.397494  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:40:46.432068  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:40:46.476192  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:40:46.524752  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
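	Each -checkend 86400 run above is a cheap expiry probe: openssl exits 0 if the certificate is still valid 86400 seconds (24 h) from now and 1 otherwise, so only the exit status matters. The same check with an explicit branch:

	  if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	    echo "cert valid for at least another 24h"
	  else
	    echo "cert expires within 24h; would need regeneration"
	  fi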
	I1121 14:40:46.572490  280056 kubeadm.go:401] StartCluster: {Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:46.572631  280056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:40:46.572688  280056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:40:46.608977  280056 cri.go:89] found id: "15917aa53c8197587b8ebdb80d10a679b2a7abe6ff5a81b0d4f5a42900e02412"
	I1121 14:40:46.609002  280056 cri.go:89] found id: "76d7dc76ff36d3fb6387582649e9fc04ab3c1cbd059a19721997d005f5434abc"
	I1121 14:40:46.609007  280056 cri.go:89] found id: "ecf45bf1d37d86d0c9346ae4b4f597dfe7f80bbc47df49bb0994a548a8922b4b"
	I1121 14:40:46.609011  280056 cri.go:89] found id: "958b1593ef47f88f59d980553b03bdcf6b5f2c94efadd777421a6a497aa6ba37"
	I1121 14:40:46.609015  280056 cri.go:89] found id: ""
	I1121 14:40:46.609064  280056 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 14:40:46.623457  280056 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:40:46Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:40:46.623543  280056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:40:46.631552  280056 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:40:46.631604  280056 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:40:46.631642  280056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:40:46.639112  280056 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:40:46.640392  280056 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-696683" does not appear in /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:46.641294  280056 kubeconfig.go:62] /home/jenkins/minikube-integration/21847-11045/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-696683" cluster setting kubeconfig missing "newest-cni-696683" context setting]
	I1121 14:40:46.642656  280056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:46.644901  280056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:40:46.652585  280056 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1121 14:40:46.652614  280056 kubeadm.go:602] duration metric: took 21.003413ms to restartPrimaryControlPlane
	I1121 14:40:46.652622  280056 kubeadm.go:403] duration metric: took 80.144736ms to StartCluster
	I1121 14:40:46.652645  280056 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:46.652695  280056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:46.655150  280056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:46.655378  280056 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:40:46.655488  280056 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:40:46.655593  280056 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-696683"
	I1121 14:40:46.655610  280056 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-696683"
	W1121 14:40:46.655619  280056 addons.go:248] addon storage-provisioner should already be in state true
	I1121 14:40:46.655632  280056 config.go:182] Loaded profile config "newest-cni-696683": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:46.655645  280056 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:46.655665  280056 addons.go:70] Setting dashboard=true in profile "newest-cni-696683"
	I1121 14:40:46.655693  280056 addons.go:239] Setting addon dashboard=true in "newest-cni-696683"
	W1121 14:40:46.655703  280056 addons.go:248] addon dashboard should already be in state true
	I1121 14:40:46.655689  280056 addons.go:70] Setting default-storageclass=true in profile "newest-cni-696683"
	I1121 14:40:46.655739  280056 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:46.655746  280056 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-696683"
	I1121 14:40:46.656081  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.656134  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.656263  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.659699  280056 out.go:179] * Verifying Kubernetes components...
	I1121 14:40:46.660933  280056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:46.682004  280056 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1121 14:40:46.682004  280056 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:40:46.682500  280056 addons.go:239] Setting addon default-storageclass=true in "newest-cni-696683"
	W1121 14:40:46.682522  280056 addons.go:248] addon default-storageclass should already be in state true
	I1121 14:40:46.682547  280056 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:46.683001  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.686740  280056 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:46.686759  280056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:40:46.686806  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:46.688141  280056 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1121 14:40:43.976021  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	W1121 14:40:45.976308  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	I1121 14:40:46.689188  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1121 14:40:46.689209  280056 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1121 14:40:46.689271  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:46.713217  280056 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:46.713242  280056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:40:46.713295  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:46.720516  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:46.724111  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:46.739551  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:46.802053  280056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:46.815536  280056 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:40:46.815609  280056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:40:46.828043  280056 api_server.go:72] duration metric: took 172.633997ms to wait for apiserver process to appear ...
	I1121 14:40:46.828064  280056 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:40:46.828080  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:46.838809  280056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:46.840678  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1121 14:40:46.840695  280056 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1121 14:40:46.852409  280056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:46.856391  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1121 14:40:46.856410  280056 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1121 14:40:46.871966  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1121 14:40:46.871983  280056 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1121 14:40:46.887375  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1121 14:40:46.887424  280056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1121 14:40:46.902141  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1121 14:40:46.902162  280056 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1121 14:40:46.917178  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1121 14:40:46.917195  280056 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1121 14:40:46.930976  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1121 14:40:46.930993  280056 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1121 14:40:46.944066  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1121 14:40:46.944083  280056 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1121 14:40:46.956286  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 14:40:46.956305  280056 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1121 14:40:46.968997  280056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 14:40:48.462794  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1121 14:40:48.462825  280056 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1121 14:40:48.462841  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:48.469024  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1121 14:40:48.469051  280056 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
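	The 403 here is expected this early after a restart: the probe is unauthenticated (system:anonymous), and the RBAC bootstrap roles that normally allow anonymous access to /healthz are recreated by the rbac/bootstrap-roles post-start hook, which the 500 responses below still report as failed. minikube treats any non-200 as not-ready and retries. An equivalent manual probe (sketch; -k because the server cert is not verified here):

	  curl -ksS -m 2 -o /dev/null -w '%{http_code}\n' https://192.168.85.2:8443/healthz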
	I1121 14:40:48.829162  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:48.834337  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:40:48.834367  280056 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 14:40:48.963574  280056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.124616797s)
	I1121 14:40:48.963650  280056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.111184837s)
	I1121 14:40:48.963723  280056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.994699389s)
	I1121 14:40:48.965217  280056 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-696683 addons enable metrics-server
	
	I1121 14:40:48.973715  280056 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1121 14:40:48.974963  280056 addons.go:530] duration metric: took 2.319478862s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1121 14:40:49.328711  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:49.333400  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:40:49.333420  280056 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 14:40:49.829132  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:49.833697  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 14:40:49.834649  280056 api_server.go:141] control plane version: v1.34.1
	I1121 14:40:49.834670  280056 api_server.go:131] duration metric: took 3.006599871s to wait for apiserver health ...
	I1121 14:40:49.834678  280056 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:40:49.838227  280056 system_pods.go:59] 8 kube-system pods found
	I1121 14:40:49.838263  280056 system_pods.go:61] "coredns-66bc5c9577-ncl4f" [93a097a2-31da-4456-8435-e1a976f3d7f9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 14:40:49.838273  280056 system_pods.go:61] "etcd-newest-cni-696683" [113e31f1-f22b-4ed8-adcb-8c12d55e1f4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:40:49.838286  280056 system_pods.go:61] "kindnet-m6v5n" [98b995f3-7968-4e19-abc1-10772001bd6c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1121 14:40:49.838301  280056 system_pods.go:61] "kube-apiserver-newest-cni-696683" [a046bba0-991c-4291-b89a-a0e64e3686b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:40:49.838311  280056 system_pods.go:61] "kube-controller-manager-newest-cni-696683" [dd3689f1-9ccf-4bca-8147-1779d92c3598] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:40:49.838318  280056 system_pods.go:61] "kube-proxy-2dkdg" [13ba7b82-bf92-4b76-a812-685c12ecb21c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1121 14:40:49.838331  280056 system_pods.go:61] "kube-scheduler-newest-cni-696683" [57fd312e-bc77-4ecb-9f3b-caa50247e033] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:40:49.838337  280056 system_pods.go:61] "storage-provisioner" [3cf44ed4-4cd8-4655-aef5-38415eb66de4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 14:40:49.838351  280056 system_pods.go:74] duration metric: took 3.666864ms to wait for pod list to return data ...
	I1121 14:40:49.838364  280056 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:40:49.840748  280056 default_sa.go:45] found service account: "default"
	I1121 14:40:49.840769  280056 default_sa.go:55] duration metric: took 2.395802ms for default service account to be created ...
	I1121 14:40:49.840783  280056 kubeadm.go:587] duration metric: took 3.185377365s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1121 14:40:49.840808  280056 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:40:49.842953  280056 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:40:49.842978  280056 node_conditions.go:123] node cpu capacity is 8
	I1121 14:40:49.842993  280056 node_conditions.go:105] duration metric: took 2.175119ms to run NodePressure ...
	I1121 14:40:49.843009  280056 start.go:242] waiting for startup goroutines ...
	I1121 14:40:49.843022  280056 start.go:247] waiting for cluster config update ...
	I1121 14:40:49.843039  280056 start.go:256] writing updated cluster config ...
	I1121 14:40:49.843325  280056 ssh_runner.go:195] Run: rm -f paused
	I1121 14:40:49.887622  280056 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:40:49.890008  280056 out.go:179] * Done! kubectl is now configured to use "newest-cni-696683" cluster and "default" namespace by default
	W1121 14:40:45.694185  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:47.694654  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:50.194324  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:48.477939  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	I1121 14:40:50.487901  269911 node_ready.go:49] node "auto-989875" is "Ready"
	I1121 14:40:50.487937  269911 node_ready.go:38] duration metric: took 11.014560663s for node "auto-989875" to be "Ready" ...
	I1121 14:40:50.487951  269911 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:40:50.488000  269911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:40:50.506437  269911 api_server.go:72] duration metric: took 11.333890908s to wait for apiserver process to appear ...
	I1121 14:40:50.506462  269911 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:40:50.506481  269911 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1121 14:40:50.511381  269911 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1121 14:40:50.512348  269911 api_server.go:141] control plane version: v1.34.1
	I1121 14:40:50.512374  269911 api_server.go:131] duration metric: took 5.904455ms to wait for apiserver health ...
	I1121 14:40:50.512385  269911 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:40:50.516900  269911 system_pods.go:59] 8 kube-system pods found
	I1121 14:40:50.516933  269911 system_pods.go:61] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:50.516942  269911 system_pods.go:61] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:50.516954  269911 system_pods.go:61] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:50.516961  269911 system_pods.go:61] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:50.516969  269911 system_pods.go:61] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:50.516975  269911 system_pods.go:61] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:50.516983  269911 system_pods.go:61] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:50.516988  269911 system_pods.go:61] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending
	I1121 14:40:50.516995  269911 system_pods.go:74] duration metric: took 4.603561ms to wait for pod list to return data ...
	I1121 14:40:50.517018  269911 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:40:50.519971  269911 default_sa.go:45] found service account: "default"
	I1121 14:40:50.519990  269911 default_sa.go:55] duration metric: took 2.962898ms for default service account to be created ...
	I1121 14:40:50.520000  269911 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:40:50.523136  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:50.523178  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:50.523193  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:50.523202  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:50.523207  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:50.523212  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:50.523218  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:50.523222  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:50.523233  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending
	I1121 14:40:50.523254  269911 retry.go:31] will retry after 276.59635ms: missing components: kube-dns
	I1121 14:40:50.803782  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:50.803812  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:50.803820  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:50.803826  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:50.803830  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:50.803843  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:50.803847  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:50.803850  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:50.803854  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:40:50.803868  269911 retry.go:31] will retry after 254.453611ms: missing components: kube-dns
	I1121 14:40:51.063022  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:51.063048  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:51.063054  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:51.063060  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:51.063064  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:51.063070  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:51.063073  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:51.063076  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:51.063080  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:40:51.063093  269911 retry.go:31] will retry after 307.771212ms: missing components: kube-dns
	I1121 14:40:51.375222  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:51.375255  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:51.375268  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:51.375276  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:51.375282  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:51.375288  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:51.375299  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:51.375304  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:51.375315  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:40:51.375332  269911 retry.go:31] will retry after 408.234241ms: missing components: kube-dns
	I1121 14:40:51.790035  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:51.790067  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Running
	I1121 14:40:51.790076  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:51.790082  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:51.790088  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:51.790095  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:51.790101  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:51.790106  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:51.790111  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Running
	I1121 14:40:51.790124  269911 system_pods.go:126] duration metric: took 1.270114943s to wait for k8s-apps to be running ...
	I1121 14:40:51.790137  269911 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:40:51.790190  269911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:40:51.806346  269911 system_svc.go:56] duration metric: took 16.201575ms WaitForService to wait for kubelet
	I1121 14:40:51.806377  269911 kubeadm.go:587] duration metric: took 12.633833991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:40:51.806402  269911 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:40:51.808958  269911 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:40:51.808980  269911 node_conditions.go:123] node cpu capacity is 8
	I1121 14:40:51.808992  269911 node_conditions.go:105] duration metric: took 2.584392ms to run NodePressure ...
	I1121 14:40:51.809003  269911 start.go:242] waiting for startup goroutines ...
	I1121 14:40:51.809009  269911 start.go:247] waiting for cluster config update ...
	I1121 14:40:51.809019  269911 start.go:256] writing updated cluster config ...
	I1121 14:40:51.809271  269911 ssh_runner.go:195] Run: rm -f paused
	I1121 14:40:51.812826  269911 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:40:51.816346  269911 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r6m4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.820311  269911 pod_ready.go:94] pod "coredns-66bc5c9577-r6m4z" is "Ready"
	I1121 14:40:51.820332  269911 pod_ready.go:86] duration metric: took 3.96803ms for pod "coredns-66bc5c9577-r6m4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.822259  269911 pod_ready.go:83] waiting for pod "etcd-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.826005  269911 pod_ready.go:94] pod "etcd-auto-989875" is "Ready"
	I1121 14:40:51.826024  269911 pod_ready.go:86] duration metric: took 3.74738ms for pod "etcd-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.827872  269911 pod_ready.go:83] waiting for pod "kube-apiserver-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.831284  269911 pod_ready.go:94] pod "kube-apiserver-auto-989875" is "Ready"
	I1121 14:40:51.831303  269911 pod_ready.go:86] duration metric: took 3.411512ms for pod "kube-apiserver-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.833002  269911 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:52.217619  269911 pod_ready.go:94] pod "kube-controller-manager-auto-989875" is "Ready"
	I1121 14:40:52.217641  269911 pod_ready.go:86] duration metric: took 384.619243ms for pod "kube-controller-manager-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:52.417771  269911 pod_ready.go:83] waiting for pod "kube-proxy-ttpnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:52.816803  269911 pod_ready.go:94] pod "kube-proxy-ttpnr" is "Ready"
	I1121 14:40:52.816827  269911 pod_ready.go:86] duration metric: took 399.031224ms for pod "kube-proxy-ttpnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:53.017303  269911 pod_ready.go:83] waiting for pod "kube-scheduler-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:53.417158  269911 pod_ready.go:94] pod "kube-scheduler-auto-989875" is "Ready"
	I1121 14:40:53.417179  269911 pod_ready.go:86] duration metric: took 399.853474ms for pod "kube-scheduler-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:53.417190  269911 pod_ready.go:40] duration metric: took 1.604337241s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:40:53.465649  269911 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:40:53.467062  269911 out.go:179] * Done! kubectl is now configured to use "auto-989875" cluster and "default" namespace by default
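
The wait loop above (api_server.go polling /healthz, then retry.go backing off until kube-dns is running) is the generic readiness pattern minikube applies to each profile. Below is a minimal Go sketch of that healthz poll, for orientation only: the URL, timeout, and backoff constants are illustrative, and minikube's real client presents the cluster's client certificates instead of skipping TLS verification.

// healthz_wait.go: a sketch of the apiserver readiness poll seen in the log
// above (api_server.go healthz check plus retry.go backoff). Illustrative
// only; not minikube's actual implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes,
// roughly doubling the delay between attempts like the retry.go lines above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a cluster-local cert in this setup; a real
		// client would pin the cluster CA rather than skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				// Mirrors "https://.../healthz returned 200: ok" above.
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(delay)
		delay *= 2 // back off between attempts, as retry.go does
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.103.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}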
	
	
	==> CRI-O <==
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.313230292Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.316060617Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a7da7364-2ae0-49d4-ab49-01fe8f125177 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.3167441Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=efb40c11-6d69-4fe0-8e74-27c0c88d66e9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.317403465Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.318095844Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.3182654Z" level=info msg="Ran pod sandbox b559d46ec55a8b696aaaea12a4865670e6784eac3e85e212f4cda273037d763f with infra container: kube-system/kube-proxy-2dkdg/POD" id=a7da7364-2ae0-49d4-ab49-01fe8f125177 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.318954176Z" level=info msg="Ran pod sandbox 07ccd1b0e4b0b72c7b0f5c73817a0f74e3e0602b5c90de1ab77b00fbbf9e0b23 with infra container: kube-system/kindnet-m6v5n/POD" id=efb40c11-6d69-4fe0-8e74-27c0c88d66e9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.319153696Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=c496c6e9-0806-4274-9fbb-f8b2526d3c09 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.321632025Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1f78551a-e495-4fca-82f5-90288c5f0064 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.321669382Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9564d893-05a0-45e3-9ad2-55c34d79ae0c name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.323089677Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=271fe86c-08ac-41bd-b8d2-5132319d40a7 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.323141784Z" level=info msg="Creating container: kube-system/kube-proxy-2dkdg/kube-proxy" id=6a60a860-097d-4855-ae95-89f13a5d02f3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.323263086Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.324044509Z" level=info msg="Creating container: kube-system/kindnet-m6v5n/kindnet-cni" id=b832a5f5-9f3c-4ffb-a745-e8ff753101a2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.324151139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.327247867Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.327734214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.328505544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.328947408Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.357110495Z" level=info msg="Created container 525ca5ccf8737122b9a66a5b0feccd36cea2deec23560d29575393ada0330762: kube-system/kindnet-m6v5n/kindnet-cni" id=b832a5f5-9f3c-4ffb-a745-e8ff753101a2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.357608451Z" level=info msg="Starting container: 525ca5ccf8737122b9a66a5b0feccd36cea2deec23560d29575393ada0330762" id=844b29f1-18d5-4ac7-a0bd-d80ea5460d84 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.358619073Z" level=info msg="Created container e6409bf9f2c514d529c3416169d40e9f48edb2af837cd54c31176b7c2ee0a3d8: kube-system/kube-proxy-2dkdg/kube-proxy" id=6a60a860-097d-4855-ae95-89f13a5d02f3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.35906394Z" level=info msg="Starting container: e6409bf9f2c514d529c3416169d40e9f48edb2af837cd54c31176b7c2ee0a3d8" id=126768cc-2c4e-467f-95b3-44dfd9144bfc name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.359253187Z" level=info msg="Started container" PID=1058 containerID=525ca5ccf8737122b9a66a5b0feccd36cea2deec23560d29575393ada0330762 description=kube-system/kindnet-m6v5n/kindnet-cni id=844b29f1-18d5-4ac7-a0bd-d80ea5460d84 name=/runtime.v1.RuntimeService/StartContainer sandboxID=07ccd1b0e4b0b72c7b0f5c73817a0f74e3e0602b5c90de1ab77b00fbbf9e0b23
	Nov 21 14:40:49 newest-cni-696683 crio[526]: time="2025-11-21T14:40:49.361438594Z" level=info msg="Started container" PID=1059 containerID=e6409bf9f2c514d529c3416169d40e9f48edb2af837cd54c31176b7c2ee0a3d8 description=kube-system/kube-proxy-2dkdg/kube-proxy id=126768cc-2c4e-467f-95b3-44dfd9144bfc name=/runtime.v1.RuntimeService/StartContainer sandboxID=b559d46ec55a8b696aaaea12a4865670e6784eac3e85e212f4cda273037d763f
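
The CRI-O lines above record the standard CRI call sequence: RunPodSandbox creates the pod's infra sandbox, CreateContainer stages each container inside it, and StartContainer runs it. The sketch below drives that sequence through the CRI gRPC stubs in k8s.io/cri-api, assuming CRI-O's default socket path and a recent grpc-go (grpc.NewClient); the pod metadata and image are invented for illustration.

// cri_sequence.go: a sketch of the RunPodSandbox -> CreateContainer ->
// StartContainer sequence logged above. Socket path, names, and image
// are assumptions, not values from this test run.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	sandboxConfig := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "demo", Uid: "demo-uid", Namespace: "default",
		},
	}
	// 1. "Ran pod sandbox ... with infra container": create the sandbox.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxConfig})
	if err != nil {
		log.Fatal(err)
	}
	// 2. "Creating container": stage a container inside that sandbox.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		SandboxConfig: sandboxConfig,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "demo-ctr"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/pause:3.10"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// 3. "Started container": actually run it.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}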
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	525ca5ccf8737       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   07ccd1b0e4b0b       kindnet-m6v5n                               kube-system
	e6409bf9f2c51       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   b559d46ec55a8       kube-proxy-2dkdg                            kube-system
	15917aa53c819       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   6cebd072730a9       kube-apiserver-newest-cni-696683            kube-system
	76d7dc76ff36d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   c9e04a0694739       kube-controller-manager-newest-cni-696683   kube-system
	ecf45bf1d37d8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   5ad4a662d4f2b       etcd-newest-cni-696683                      kube-system
	958b1593ef47f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   68afd87728861       kube-scheduler-newest-cni-696683            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-696683
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-696683
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=newest-cni-696683
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_40_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:40:24 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-696683
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:40:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:40:48 +0000   Fri, 21 Nov 2025 14:40:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:40:48 +0000   Fri, 21 Nov 2025 14:40:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:40:48 +0000   Fri, 21 Nov 2025 14:40:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 21 Nov 2025 14:40:48 +0000   Fri, 21 Nov 2025 14:40:23 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-696683
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                eb56864d-718a-4ff0-98f9-3e18a790b305
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-696683                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-m6v5n                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-newest-cni-696683             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-newest-cni-696683    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-2dkdg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-newest-cni-696683             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  33s (x8 over 33s)  kubelet          Node newest-cni-696683 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x8 over 33s)  kubelet          Node newest-cni-696683 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x8 over 33s)  kubelet          Node newest-cni-696683 status is now: NodeHasSufficientPID
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s                kubelet          Node newest-cni-696683 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s                kubelet          Node newest-cni-696683 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s                kubelet          Node newest-cni-696683 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                node-controller  Node newest-cni-696683 event: Registered Node newest-cni-696683 in Controller
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-696683 event: Registered Node newest-cni-696683 in Controller
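
The Ready=False condition above (KubeletNotReady, waiting on a CNI config in /etc/cni/net.d/) is the usual reason a freshly restarted node briefly carries the not-ready taint. A hedged client-go sketch that reads that same condition; the node name and kubeconfig path are assumptions:

// nodeready.go: read the Ready condition that the describe output reports.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"newest-cni-696683", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		// The Ready condition carries the KubeletNotReady reason and the
		// "container runtime network not ready" message seen above.
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}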
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [ecf45bf1d37d86d0c9346ae4b4f597dfe7f80bbc47df49bb0994a548a8922b4b] <==
	{"level":"warn","ts":"2025-11-21T14:40:47.871970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.877874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.885469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.893888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.901044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.907034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.914422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.920955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.927085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.933543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.940375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.946872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.953216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.959397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.965903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.978257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.984726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:47.997300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:48.004619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:48.010925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:48.016872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:48.038808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:48.045074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:48.051137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:48.097486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59068","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:40:55 up  1:23,  0 user,  load average: 4.40, 3.04, 1.94
	Linux newest-cni-696683 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [525ca5ccf8737122b9a66a5b0feccd36cea2deec23560d29575393ada0330762] <==
	I1121 14:40:49.507935       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:40:49.508187       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1121 14:40:49.508315       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:40:49.508337       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:40:49.508365       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:40:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:40:49.710116       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:40:49.710144       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:40:49.710159       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:40:49.710364       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [15917aa53c8197587b8ebdb80d10a679b2a7abe6ff5a81b0d4f5a42900e02412] <==
	I1121 14:40:48.539611       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1121 14:40:48.539548       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1121 14:40:48.540491       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 14:40:48.539633       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1121 14:40:48.540624       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 14:40:48.540639       1 aggregator.go:171] initial CRD sync complete...
	I1121 14:40:48.540655       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 14:40:48.540662       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:40:48.540669       1 cache.go:39] Caches are synced for autoregister controller
	E1121 14:40:48.544876       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 14:40:48.547005       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 14:40:48.571191       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:40:48.769079       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:40:48.798248       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:40:48.823315       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:40:48.830250       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:40:48.836577       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:40:48.864469       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.13.24"}
	I1121 14:40:48.873605       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.97.77"}
	I1121 14:40:49.442337       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:40:51.874261       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:40:52.225338       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:40:52.423407       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [76d7dc76ff36d3fb6387582649e9fc04ab3c1cbd059a19721997d005f5434abc] <==
	I1121 14:40:51.836043       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:40:51.836051       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:40:51.841691       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 14:40:51.859197       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:40:51.862784       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:40:51.870157       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 14:40:51.870258       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 14:40:51.871306       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:40:51.871330       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 14:40:51.871343       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:40:51.871399       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:40:51.871446       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:40:51.871487       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:40:51.874257       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:40:51.875495       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:40:51.876574       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 14:40:51.876900       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:40:51.878939       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 14:40:51.881342       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:40:51.883247       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:40:51.884270       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:40:51.885414       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:40:51.887396       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:40:51.887645       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:40:51.889398       1 shared_informer.go:356] "Caches are synced" controller="GC"
	
	
	==> kube-proxy [e6409bf9f2c514d529c3416169d40e9f48edb2af837cd54c31176b7c2ee0a3d8] <==
	I1121 14:40:49.392694       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:40:49.472235       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:40:49.572474       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:40:49.572503       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1121 14:40:49.572592       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:40:49.593536       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:40:49.593622       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:40:49.599670       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:40:49.599950       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:40:49.599981       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:40:49.601486       1 config.go:200] "Starting service config controller"
	I1121 14:40:49.601519       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:40:49.601738       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:40:49.601769       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:40:49.601841       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:40:49.601851       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:40:49.601748       1 config.go:309] "Starting node config controller"
	I1121 14:40:49.601889       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:40:49.601900       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:40:49.702345       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:40:49.702501       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:40:49.702523       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [958b1593ef47f88f59d980553b03bdcf6b5f2c94efadd777421a6a497aa6ba37] <==
	I1121 14:40:47.263621       1 serving.go:386] Generated self-signed cert in-memory
	W1121 14:40:48.464162       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1121 14:40:48.464281       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1121 14:40:48.464303       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1121 14:40:48.464314       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1121 14:40:48.495394       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 14:40:48.495429       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:40:48.497322       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:40:48.497357       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:40:48.497610       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:40:48.497970       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 14:40:48.598503       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: E1121 14:40:48.053006     683 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-696683\" not found" node="newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.511642     683 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: E1121 14:40:48.521193     683 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-696683\" already exists" pod="kube-system/kube-apiserver-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.521233     683 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: E1121 14:40:48.527351     683 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-696683\" already exists" pod="kube-system/kube-controller-manager-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.527383     683 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: E1121 14:40:48.531113     683 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-696683\" already exists" pod="kube-system/kube-scheduler-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.531140     683 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: E1121 14:40:48.536015     683 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-696683\" already exists" pod="kube-system/etcd-newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.573755     683 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.573855     683 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-696683"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.573894     683 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 21 14:40:48 newest-cni-696683 kubelet[683]: I1121 14:40:48.574812     683 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.005762     683 apiserver.go:52] "Watching apiserver"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.009803     683 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.053961     683 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-696683"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: E1121 14:40:49.059953     683 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-696683\" already exists" pod="kube-system/kube-apiserver-newest-cni-696683"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.081504     683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98b995f3-7968-4e19-abc1-10772001bd6c-xtables-lock\") pod \"kindnet-m6v5n\" (UID: \"98b995f3-7968-4e19-abc1-10772001bd6c\") " pod="kube-system/kindnet-m6v5n"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.081542     683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98b995f3-7968-4e19-abc1-10772001bd6c-lib-modules\") pod \"kindnet-m6v5n\" (UID: \"98b995f3-7968-4e19-abc1-10772001bd6c\") " pod="kube-system/kindnet-m6v5n"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.081582     683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/98b995f3-7968-4e19-abc1-10772001bd6c-cni-cfg\") pod \"kindnet-m6v5n\" (UID: \"98b995f3-7968-4e19-abc1-10772001bd6c\") " pod="kube-system/kindnet-m6v5n"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.081636     683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13ba7b82-bf92-4b76-a812-685c12ecb21c-lib-modules\") pod \"kube-proxy-2dkdg\" (UID: \"13ba7b82-bf92-4b76-a812-685c12ecb21c\") " pod="kube-system/kube-proxy-2dkdg"
	Nov 21 14:40:49 newest-cni-696683 kubelet[683]: I1121 14:40:49.081673     683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13ba7b82-bf92-4b76-a812-685c12ecb21c-xtables-lock\") pod \"kube-proxy-2dkdg\" (UID: \"13ba7b82-bf92-4b76-a812-685c12ecb21c\") " pod="kube-system/kube-proxy-2dkdg"
	Nov 21 14:40:50 newest-cni-696683 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:40:50 newest-cni-696683 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:40:50 newest-cni-696683 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-696683 -n newest-cni-696683
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-696683 -n newest-cni-696683: exit status 2 (389.059083ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-696683 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-ncl4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-snb4j kubernetes-dashboard-855c9754f9-n4fn8
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-696683 describe pod coredns-66bc5c9577-ncl4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-snb4j kubernetes-dashboard-855c9754f9-n4fn8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-696683 describe pod coredns-66bc5c9577-ncl4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-snb4j kubernetes-dashboard-855c9754f9-n4fn8: exit status 1 (78.416652ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-ncl4f" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-snb4j" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-n4fn8" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-696683 describe pod coredns-66bc5c9577-ncl4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-snb4j kubernetes-dashboard-855c9754f9-n4fn8: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.53s)
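
The post-mortem above collects non-running pods with kubectl's field selector (status.phase!=Running) before describing them. The same query via client-go, as a sketch; kubeconfig discovery is simplified to the default path:

// nonrunning.go: list pods whose status.phase is not Running, the
// equivalent in spirit of:
//   kubectl get po -A --field-selector=status.phase!=Running
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Empty namespace ("") lists across all namespaces, like -A above.
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}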

TestStartStop/group/embed-certs/serial/Pause (6.44s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-441390 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-441390 --alsologtostderr -v=1: exit status 80 (2.539780752s)

-- stdout --
	* Pausing node embed-certs-441390 ... 
	
	

-- /stdout --
** stderr ** 
	I1121 14:40:51.276487  282672 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:40:51.276599  282672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:51.276604  282672 out.go:374] Setting ErrFile to fd 2...
	I1121 14:40:51.276609  282672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:51.276832  282672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:40:51.277112  282672 out.go:368] Setting JSON to false
	I1121 14:40:51.277139  282672 mustload.go:66] Loading cluster: embed-certs-441390
	I1121 14:40:51.277453  282672 config.go:182] Loaded profile config "embed-certs-441390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:51.277819  282672 cli_runner.go:164] Run: docker container inspect embed-certs-441390 --format={{.State.Status}}
	I1121 14:40:51.296699  282672 host.go:66] Checking if "embed-certs-441390" exists ...
	I1121 14:40:51.296925  282672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:40:51.365360  282672 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-21 14:40:51.355686927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:40:51.365991  282672 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-441390 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1121 14:40:51.367581  282672 out.go:179] * Pausing node embed-certs-441390 ... 
	I1121 14:40:51.368722  282672 host.go:66] Checking if "embed-certs-441390" exists ...
	I1121 14:40:51.369021  282672 ssh_runner.go:195] Run: systemctl --version
	I1121 14:40:51.369068  282672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-441390
	I1121 14:40:51.388359  282672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/embed-certs-441390/id_rsa Username:docker}
	I1121 14:40:51.483379  282672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:40:51.496687  282672 pause.go:52] kubelet running: true
	I1121 14:40:51.496775  282672 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:40:51.661654  282672 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:40:51.661773  282672 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:40:51.731189  282672 cri.go:89] found id: "95767056d7b108c98614d1dd0610b157001ab0f1932ef578fc0f0e6fdf7a90bb"
	I1121 14:40:51.731216  282672 cri.go:89] found id: "fa41668278e8170f60e1dccd430711a3eed075f795e571c30cc83710f2742a90"
	I1121 14:40:51.731222  282672 cri.go:89] found id: "b3a0a42501ce1cf520bdc52efb98a024759ca435ce8b9848519add262264914a"
	I1121 14:40:51.731227  282672 cri.go:89] found id: "81750ffcd002e880aab0fc94c97b3c53e0c4bfc576f3abd469310642ac74e31c"
	I1121 14:40:51.731231  282672 cri.go:89] found id: "d77dc478ac3af84688fd0b17964f2958adba526b379a9beee309ee2ce20ef8ab"
	I1121 14:40:51.731237  282672 cri.go:89] found id: "26a9f970518848d1b900fdd0d942efb823d83d328447dae842211d42697b5a1a"
	I1121 14:40:51.731241  282672 cri.go:89] found id: "89526413a6f7420bdb1189dd04428b799769f4b2d2b5cbf920e078cac420b1ac"
	I1121 14:40:51.731246  282672 cri.go:89] found id: "0760d88ca4d91b1009c8a72ad47ebcb1d7dd0be3b46f0aa30647c629d08bd762"
	I1121 14:40:51.731250  282672 cri.go:89] found id: "ae9c6391b9097ef248f2a7247fad69dec5e3f671efd813eb51b9341624a3d3d5"
	I1121 14:40:51.731257  282672 cri.go:89] found id: "1c1be146e89e0ac17f82a731250519d13aa441f10efd72b55f37ffd6a8766f48"
	I1121 14:40:51.731262  282672 cri.go:89] found id: ""
	I1121 14:40:51.731313  282672 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:40:51.742651  282672 retry.go:31] will retry after 271.05459ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:40:51Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:40:52.014085  282672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:40:52.029183  282672 pause.go:52] kubelet running: false
	I1121 14:40:52.029237  282672 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:40:52.194211  282672 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:40:52.194310  282672 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:40:52.266063  282672 cri.go:89] found id: "95767056d7b108c98614d1dd0610b157001ab0f1932ef578fc0f0e6fdf7a90bb"
	I1121 14:40:52.266085  282672 cri.go:89] found id: "fa41668278e8170f60e1dccd430711a3eed075f795e571c30cc83710f2742a90"
	I1121 14:40:52.266091  282672 cri.go:89] found id: "b3a0a42501ce1cf520bdc52efb98a024759ca435ce8b9848519add262264914a"
	I1121 14:40:52.266095  282672 cri.go:89] found id: "81750ffcd002e880aab0fc94c97b3c53e0c4bfc576f3abd469310642ac74e31c"
	I1121 14:40:52.266099  282672 cri.go:89] found id: "d77dc478ac3af84688fd0b17964f2958adba526b379a9beee309ee2ce20ef8ab"
	I1121 14:40:52.266103  282672 cri.go:89] found id: "26a9f970518848d1b900fdd0d942efb823d83d328447dae842211d42697b5a1a"
	I1121 14:40:52.266106  282672 cri.go:89] found id: "89526413a6f7420bdb1189dd04428b799769f4b2d2b5cbf920e078cac420b1ac"
	I1121 14:40:52.266109  282672 cri.go:89] found id: "0760d88ca4d91b1009c8a72ad47ebcb1d7dd0be3b46f0aa30647c629d08bd762"
	I1121 14:40:52.266113  282672 cri.go:89] found id: "ae9c6391b9097ef248f2a7247fad69dec5e3f671efd813eb51b9341624a3d3d5"
	I1121 14:40:52.266121  282672 cri.go:89] found id: "1c1be146e89e0ac17f82a731250519d13aa441f10efd72b55f37ffd6a8766f48"
	I1121 14:40:52.266125  282672 cri.go:89] found id: ""
	I1121 14:40:52.266179  282672 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:40:52.278106  282672 retry.go:31] will retry after 345.040181ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:40:52Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:40:52.623634  282672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:40:52.635829  282672 pause.go:52] kubelet running: false
	I1121 14:40:52.635873  282672 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:40:52.781868  282672 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:40:52.781952  282672 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:40:52.850034  282672 cri.go:89] found id: "95767056d7b108c98614d1dd0610b157001ab0f1932ef578fc0f0e6fdf7a90bb"
	I1121 14:40:52.850056  282672 cri.go:89] found id: "fa41668278e8170f60e1dccd430711a3eed075f795e571c30cc83710f2742a90"
	I1121 14:40:52.850062  282672 cri.go:89] found id: "b3a0a42501ce1cf520bdc52efb98a024759ca435ce8b9848519add262264914a"
	I1121 14:40:52.850066  282672 cri.go:89] found id: "81750ffcd002e880aab0fc94c97b3c53e0c4bfc576f3abd469310642ac74e31c"
	I1121 14:40:52.850077  282672 cri.go:89] found id: "d77dc478ac3af84688fd0b17964f2958adba526b379a9beee309ee2ce20ef8ab"
	I1121 14:40:52.850085  282672 cri.go:89] found id: "26a9f970518848d1b900fdd0d942efb823d83d328447dae842211d42697b5a1a"
	I1121 14:40:52.850089  282672 cri.go:89] found id: "89526413a6f7420bdb1189dd04428b799769f4b2d2b5cbf920e078cac420b1ac"
	I1121 14:40:52.850093  282672 cri.go:89] found id: "0760d88ca4d91b1009c8a72ad47ebcb1d7dd0be3b46f0aa30647c629d08bd762"
	I1121 14:40:52.850097  282672 cri.go:89] found id: "ae9c6391b9097ef248f2a7247fad69dec5e3f671efd813eb51b9341624a3d3d5"
	I1121 14:40:52.850116  282672 cri.go:89] found id: "1c1be146e89e0ac17f82a731250519d13aa441f10efd72b55f37ffd6a8766f48"
	I1121 14:40:52.850125  282672 cri.go:89] found id: ""
	I1121 14:40:52.850166  282672 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:40:52.862283  282672 retry.go:31] will retry after 603.360825ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:40:52Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:40:53.466755  282672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:40:53.483173  282672 pause.go:52] kubelet running: false
	I1121 14:40:53.483249  282672 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:40:53.663276  282672 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:40:53.663359  282672 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:40:53.736157  282672 cri.go:89] found id: "95767056d7b108c98614d1dd0610b157001ab0f1932ef578fc0f0e6fdf7a90bb"
	I1121 14:40:53.736182  282672 cri.go:89] found id: "fa41668278e8170f60e1dccd430711a3eed075f795e571c30cc83710f2742a90"
	I1121 14:40:53.736191  282672 cri.go:89] found id: "b3a0a42501ce1cf520bdc52efb98a024759ca435ce8b9848519add262264914a"
	I1121 14:40:53.736196  282672 cri.go:89] found id: "81750ffcd002e880aab0fc94c97b3c53e0c4bfc576f3abd469310642ac74e31c"
	I1121 14:40:53.736200  282672 cri.go:89] found id: "d77dc478ac3af84688fd0b17964f2958adba526b379a9beee309ee2ce20ef8ab"
	I1121 14:40:53.736205  282672 cri.go:89] found id: "26a9f970518848d1b900fdd0d942efb823d83d328447dae842211d42697b5a1a"
	I1121 14:40:53.736208  282672 cri.go:89] found id: "89526413a6f7420bdb1189dd04428b799769f4b2d2b5cbf920e078cac420b1ac"
	I1121 14:40:53.736212  282672 cri.go:89] found id: "0760d88ca4d91b1009c8a72ad47ebcb1d7dd0be3b46f0aa30647c629d08bd762"
	I1121 14:40:53.736215  282672 cri.go:89] found id: "ae9c6391b9097ef248f2a7247fad69dec5e3f671efd813eb51b9341624a3d3d5"
	I1121 14:40:53.736229  282672 cri.go:89] found id: "1c1be146e89e0ac17f82a731250519d13aa441f10efd72b55f37ffd6a8766f48"
	I1121 14:40:53.736233  282672 cri.go:89] found id: ""
	I1121 14:40:53.736286  282672 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:40:53.750031  282672 out.go:203] 
	W1121 14:40:53.751155  282672 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:40:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:40:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:40:53.751169  282672 out.go:285] * 
	* 
	W1121 14:40:53.755601  282672 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:40:53.757289  282672 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-441390 --alsologtostderr -v=1 failed: exit status 80
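Analysis: the pause failure above reduces to one root cause. Before pausing, minikube enumerates running containers with `sudo runc list -f json`; on this node runc's state directory is missing ("open /run/runc: no such file or directory"), so every attempt, including each backoff retry from retry.go, exits with status 1 until pause gives up with GUEST_PAUSE. The sketch below reproduces that check in Go, assuming local exec in place of minikube's SSH runner; the crictl fallback is hypothetical, not minikube's behavior, and only illustrates that the CRI still sees the ten container IDs found by cri.go above.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listRunning mirrors the step that fails above: `sudo runc list -f json`
	// reads runc's state dir (/run/runc here), which does not exist on this node.
	func listRunning() ([]byte, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err == nil {
			return out, nil
		}
		// Hypothetical fallback for illustration: crictl queries the CRI
		// (cri-o) directly, so it can still enumerate the containers.
		return exec.Command("sudo", "crictl", "ps", "-a", "--quiet").Output()
	}

	func main() {
		out, err := listRunning()
		if err != nil {
			fmt.Println("list failed:", err)
			return
		}
		fmt.Printf("%s", out)
	}

Run on the node itself, the first command would be expected to fail exactly as in the log, while the crictl path succeeds, which is why the container IDs above are visible even though pause cannot proceed.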
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-441390
helpers_test.go:243: (dbg) docker inspect embed-certs-441390:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78",
	        "Created": "2025-11-21T14:39:07.796898766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272230,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:40:12.324804064Z",
	            "FinishedAt": "2025-11-21T14:40:11.144402786Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78/hostname",
	        "HostsPath": "/var/lib/docker/containers/0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78/hosts",
	        "LogPath": "/var/lib/docker/containers/0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78/0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78-json.log",
	        "Name": "/embed-certs-441390",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-441390:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-441390",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78",
	                "LowerDir": "/var/lib/docker/overlay2/600fd769bdab16b7cfa0c469ccebb67ba68133c5b4bce708cd3a08511bd496b4-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/600fd769bdab16b7cfa0c469ccebb67ba68133c5b4bce708cd3a08511bd496b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/600fd769bdab16b7cfa0c469ccebb67ba68133c5b4bce708cd3a08511bd496b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/600fd769bdab16b7cfa0c469ccebb67ba68133c5b4bce708cd3a08511bd496b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-441390",
	                "Source": "/var/lib/docker/volumes/embed-certs-441390/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-441390",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-441390",
	                "name.minikube.sigs.k8s.io": "embed-certs-441390",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1616feb3cbffb189fbe9d18492a128fe43525dd11d97b9987610b1e0b6cff695",
	            "SandboxKey": "/var/run/docker/netns/1616feb3cbff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-441390": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e6dc762b4b87807c44de5ce5e6dedcc7963047110765e9594324098021783415",
	                    "EndpointID": "de560efd9ad64dbe7258497f23f60d9b61d0a87aca5fe5e3ff1cc4ca4e688908",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "0e:4b:8c:31:ec:89",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-441390",
	                        "0ce231a2efd9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-441390 -n embed-certs-441390
I1121 14:40:53.796367   14542 config.go:182] Loaded profile config "auto-989875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-441390 -n embed-certs-441390: exit status 2 (352.381512ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
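Note: the harness flags exit status 2 as "may be ok" because `minikube status` reports component state through its exit code as well as its output; here the host prints Running while the code is nonzero, consistent with the kubelet having been disabled by the pause attempt above. A minimal Go sketch of capturing both signals, reusing this run's binary path and profile name:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the post-mortem runs; stdout still carries "Running".
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "embed-certs-441390")
		out, err := cmd.Output()
		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode() // 2 in this run: a component is stopped
		}
		fmt.Printf("host=%s exit=%d\n", out, code)
	}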
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-441390 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-441390 logs -n 25: (1.271287927s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-214044    │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ addons  │ enable metrics-server -p embed-certs-441390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ image   │ no-preload-589411 image list --format=json                                                                                                                                                                                                    │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ pause   │ -p no-preload-589411 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ stop    │ -p embed-certs-441390 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p cert-expiration-046125 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-046125       │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p kubernetes-upgrade-214044                                                                                                                                                                                                                  │ kubernetes-upgrade-214044    │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p disable-driver-mounts-708207                                                                                                                                                                                                               │ disable-driver-mounts-708207 │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p default-k8s-diff-port-859276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-859276 │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ delete  │ -p no-preload-589411                                                                                                                                                                                                                          │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p cert-expiration-046125                                                                                                                                                                                                                     │ cert-expiration-046125       │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p no-preload-589411                                                                                                                                                                                                                          │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p newest-cni-696683 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p auto-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-441390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p embed-certs-441390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ addons  │ enable metrics-server -p newest-cni-696683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ stop    │ -p newest-cni-696683 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ addons  │ enable dashboard -p newest-cni-696683 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p newest-cni-696683 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ image   │ newest-cni-696683 image list --format=json                                                                                                                                                                                                    │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ pause   │ -p newest-cni-696683 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ image   │ embed-certs-441390 image list --format=json                                                                                                                                                                                                   │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ pause   │ -p embed-certs-441390 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ ssh     │ -p auto-989875 pgrep -a kubelet                                                                                                                                                                                                               │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:40:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:40:39.698658  280056 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:40:39.699061  280056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:39.699072  280056 out.go:374] Setting ErrFile to fd 2...
	I1121 14:40:39.699078  280056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:39.699382  280056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:40:39.700016  280056 out.go:368] Setting JSON to false
	I1121 14:40:39.701601  280056 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4989,"bootTime":1763731051,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:40:39.701719  280056 start.go:143] virtualization: kvm guest
	I1121 14:40:39.703395  280056 out.go:179] * [newest-cni-696683] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:40:39.705010  280056 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:40:39.705085  280056 notify.go:221] Checking for updates...
	I1121 14:40:39.709543  280056 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:40:39.710889  280056 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:39.711654  280056 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:40:39.712608  280056 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:40:39.202164  269911 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:39.202179  269911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:40:39.202225  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:39.203262  269911 addons.go:239] Setting addon default-storageclass=true in "auto-989875"
	I1121 14:40:39.203302  269911 host.go:66] Checking if "auto-989875" exists ...
	I1121 14:40:39.203757  269911 cli_runner.go:164] Run: docker container inspect auto-989875 --format={{.State.Status}}
	I1121 14:40:39.231112  269911 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:39.231135  269911 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:40:39.231188  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:39.231351  269911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/auto-989875/id_rsa Username:docker}
	I1121 14:40:39.253202  269911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/auto-989875/id_rsa Username:docker}
	I1121 14:40:39.264883  269911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:40:39.321852  269911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:39.368717  269911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:39.374920  269911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:39.469946  269911 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1121 14:40:39.473299  269911 node_ready.go:35] waiting up to 15m0s for node "auto-989875" to be "Ready" ...
	I1121 14:40:39.713942  269911 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:40:39.714030  280056 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:40:39.715503  280056 config.go:182] Loaded profile config "newest-cni-696683": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:39.715983  280056 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:40:39.743756  280056 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:40:39.743915  280056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:40:39.815430  280056 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:40:39.803326466 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:40:39.815546  280056 docker.go:319] overlay module found
	I1121 14:40:39.816656  280056 out.go:179] * Using the docker driver based on existing profile
	I1121 14:40:39.817754  280056 start.go:309] selected driver: docker
	I1121 14:40:39.817774  280056 start.go:930] validating driver "docker" against &{Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:39.817892  280056 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:40:39.818542  280056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:40:39.891888  280056 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:40:39.880844572 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:40:39.892243  280056 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1121 14:40:39.892278  280056 cni.go:84] Creating CNI manager for ""
	I1121 14:40:39.892328  280056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:40:39.892373  280056 start.go:353] cluster config:
	{Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:39.894009  280056 out.go:179] * Starting "newest-cni-696683" primary control-plane node in "newest-cni-696683" cluster
	I1121 14:40:39.894992  280056 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:40:39.895975  280056 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:40:39.896900  280056 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:40:39.896944  280056 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 14:40:39.896959  280056 cache.go:65] Caching tarball of preloaded images
	I1121 14:40:39.896995  280056 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:40:39.897060  280056 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 14:40:39.897075  280056 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:40:39.897184  280056 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/config.json ...
	I1121 14:40:39.918549  280056 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:40:39.918582  280056 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:40:39.918603  280056 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:40:39.918629  280056 start.go:360] acquireMachinesLock for newest-cni-696683: {Name:mk685873e16cf8d4315d67b3bf50f89f3c32618f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:40:39.918691  280056 start.go:364] duration metric: took 39.301µs to acquireMachinesLock for "newest-cni-696683"
	I1121 14:40:39.918713  280056 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:40:39.918723  280056 fix.go:54] fixHost starting: 
	I1121 14:40:39.918941  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:39.939232  280056 fix.go:112] recreateIfNeeded on newest-cni-696683: state=Stopped err=<nil>
	W1121 14:40:39.939257  280056 fix.go:138] unexpected machine state, will restart: <nil>
	W1121 14:40:37.195055  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:39.196240  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	I1121 14:40:39.715037  269911 addons.go:530] duration metric: took 541.90535ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:40:39.974357  269911 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-989875" context rescaled to 1 replicas
	W1121 14:40:41.476592  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	I1121 14:40:39.940709  280056 out.go:252] * Restarting existing docker container for "newest-cni-696683" ...
	I1121 14:40:39.940774  280056 cli_runner.go:164] Run: docker start newest-cni-696683
	I1121 14:40:40.204292  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:40.225047  280056 kic.go:430] container "newest-cni-696683" state is running.
	I1121 14:40:40.225352  280056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-696683
	I1121 14:40:40.245950  280056 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/config.json ...
	I1121 14:40:40.246193  280056 machine.go:94] provisionDockerMachine start ...
	I1121 14:40:40.246264  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:40.266155  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:40.266469  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:40.266487  280056 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:40:40.267187  280056 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58902->127.0.0.1:33099: read: connection reset by peer
	I1121 14:40:43.397206  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-696683
	
	I1121 14:40:43.397237  280056 ubuntu.go:182] provisioning hostname "newest-cni-696683"
	I1121 14:40:43.397300  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:43.416243  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:43.416538  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:43.416568  280056 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-696683 && echo "newest-cni-696683" | sudo tee /etc/hostname
	I1121 14:40:43.552946  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-696683
	
	I1121 14:40:43.553020  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:43.570469  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:43.570726  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:43.570747  280056 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-696683' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-696683/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-696683' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:40:43.699459  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: 
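
The shell fragment above is minikube's idempotent hostname pin: if no /etc/hosts line already ends with the machine name, it either rewrites an existing 127.0.1.1 entry or appends one. A minimal Go sketch of the same edit, using the hostname and path from the log (illustrative only, not minikube's actual provisioner code):

// pin_hostname.go - a sketch (not minikube's real code) of the idempotent
// /etc/hosts edit performed by the shell snippet above: pin the hostname to
// 127.0.1.1, replacing an existing 127.0.1.1 line if present, else appending.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	entry := "127.0.1.1 " + name
	for _, l := range lines {
		// already pinned: a line whose last whitespace-separated field is the name
		f := strings.Fields(l)
		if len(f) > 0 && f[len(f)-1] == name {
			return hosts
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = entry // replace the existing 127.0.1.1 entry
			return strings.Join(lines, "\n")
		}
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + entry + "\n" // no 127.0.1.1 line at all: append one
}

func main() {
	b, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(pinHostname(string(b), "newest-cni-696683"))
}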
	I1121 14:40:43.699487  280056 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11045/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11045/.minikube}
	I1121 14:40:43.699509  280056 ubuntu.go:190] setting up certificates
	I1121 14:40:43.699518  280056 provision.go:84] configureAuth start
	I1121 14:40:43.699572  280056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-696683
	I1121 14:40:43.716911  280056 provision.go:143] copyHostCerts
	I1121 14:40:43.716971  280056 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem, removing ...
	I1121 14:40:43.716988  280056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem
	I1121 14:40:43.717063  280056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem (1078 bytes)
	I1121 14:40:43.717170  280056 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem, removing ...
	I1121 14:40:43.717182  280056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem
	I1121 14:40:43.717225  280056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem (1123 bytes)
	I1121 14:40:43.717301  280056 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem, removing ...
	I1121 14:40:43.717311  280056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem
	I1121 14:40:43.717354  280056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem (1679 bytes)
	I1121 14:40:43.717424  280056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem org=jenkins.newest-cni-696683 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-696683]
	I1121 14:40:43.898083  280056 provision.go:177] copyRemoteCerts
	I1121 14:40:43.898146  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:40:43.898203  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:43.915505  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.009431  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:40:44.026983  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:40:44.043724  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:40:44.059839  280056 provision.go:87] duration metric: took 360.308976ms to configureAuth
	I1121 14:40:44.059858  280056 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:40:44.060029  280056 config.go:182] Loaded profile config "newest-cni-696683": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:44.060145  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.078061  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:44.078262  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:44.078281  280056 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:40:44.359271  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 14:40:44.359300  280056 machine.go:97] duration metric: took 4.113090842s to provisionDockerMachine
	I1121 14:40:44.359333  280056 start.go:293] postStartSetup for "newest-cni-696683" (driver="docker")
	I1121 14:40:44.359359  280056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:40:44.359441  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:40:44.359503  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.377727  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.471221  280056 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:40:44.474531  280056 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:40:44.474582  280056 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:40:44.474595  280056 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/addons for local assets ...
	I1121 14:40:44.474657  280056 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/files for local assets ...
	I1121 14:40:44.474769  280056 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem -> 145422.pem in /etc/ssl/certs
	I1121 14:40:44.474885  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:40:44.482193  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:40:44.498766  280056 start.go:296] duration metric: took 139.419384ms for postStartSetup
	I1121 14:40:44.498841  280056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:40:44.498885  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.516283  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.607254  280056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:40:44.611996  280056 fix.go:56] duration metric: took 4.693269423s for fixHost
	I1121 14:40:44.612015  280056 start.go:83] releasing machines lock for "newest-cni-696683", held for 4.693312828s
	I1121 14:40:44.612074  280056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-696683
	I1121 14:40:44.629484  280056 ssh_runner.go:195] Run: cat /version.json
	I1121 14:40:44.629530  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.629596  280056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:40:44.629660  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.646651  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.647257  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	W1121 14:40:41.693191  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:43.693977  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	I1121 14:40:44.789269  280056 ssh_runner.go:195] Run: systemctl --version
	I1121 14:40:44.795157  280056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:40:44.829469  280056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:40:44.833726  280056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:40:44.833770  280056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:40:44.841442  280056 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
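
The find/-exec pipeline above sidesteps stale bridge/podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them. A rough Go equivalent, assuming only top-level files in /etc/cni/net.d matter (the log's -maxdepth 1/-type f filters are approximated with a glob):

// disable_bridge_cni.go - illustrative only: the same effect as the find/mv
// pipeline above, renaming bridge/podman CNI configs to *.mk_disabled so
// they stop taking effect while remaining recoverable.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			fmt.Printf("%s, ", m) // mirror the log's "%p, " -printf output
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}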
	I1121 14:40:44.841462  280056 start.go:496] detecting cgroup driver to use...
	I1121 14:40:44.841500  280056 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:40:44.841546  280056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:40:44.855704  280056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:40:44.867322  280056 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:40:44.867355  280056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:40:44.880286  280056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:40:44.891778  280056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:40:44.971173  280056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:40:45.053340  280056 docker.go:234] disabling docker service ...
	I1121 14:40:45.053430  280056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:40:45.066798  280056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:40:45.078751  280056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:40:45.158914  280056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:40:45.236074  280056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 14:40:45.247464  280056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:40:45.260830  280056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:40:45.260881  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.268922  280056 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 14:40:45.268972  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.276871  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.284760  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.292909  280056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:40:45.300239  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.308497  280056 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.316091  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.324294  280056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:40:45.330973  280056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:40:45.337651  280056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:45.412162  280056 ssh_runner.go:195] Run: sudo systemctl restart crio
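
The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch CRI-O to the systemd cgroup driver before restarting the service. A hedged Go sketch of the two key substitutions on the config text (the regular expressions mirror the sed patterns in the log; the remaining edits and file I/O are omitted):

// crio_conf.go - a sketch, not minikube's implementation: force pause_image
// and cgroup_manager in CRI-O's drop-in config, line by line, exactly as the
// two sed commands above do.
package main

import (
	"fmt"
	"regexp"
)

var (
	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

func configure(conf, pauseImage, cgroupDriver string) string {
	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupDriver))
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n"
	fmt.Print(configure(in, "registry.k8s.io/pause:3.10.1", "systemd"))
}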
	I1121 14:40:45.548953  280056 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:40:45.549022  280056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:40:45.552808  280056 start.go:564] Will wait 60s for crictl version
	I1121 14:40:45.552866  280056 ssh_runner.go:195] Run: which crictl
	I1121 14:40:45.556653  280056 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:40:45.580611  280056 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:40:45.580686  280056 ssh_runner.go:195] Run: crio --version
	I1121 14:40:45.607820  280056 ssh_runner.go:195] Run: crio --version
	I1121 14:40:45.636081  280056 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:40:45.637049  280056 cli_runner.go:164] Run: docker network inspect newest-cni-696683 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:40:45.652698  280056 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:40:45.656512  280056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:40:45.667700  280056 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1121 14:40:45.668667  280056 kubeadm.go:884] updating cluster {Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:40:45.668785  280056 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:40:45.668828  280056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:40:45.700321  280056 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:40:45.700343  280056 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:40:45.700378  280056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:40:45.724113  280056 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:40:45.724131  280056 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:40:45.724139  280056 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1121 14:40:45.724223  280056 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-696683 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:40:45.724281  280056 ssh_runner.go:195] Run: crio config
	I1121 14:40:45.769317  280056 cni.go:84] Creating CNI manager for ""
	I1121 14:40:45.769335  280056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:40:45.769351  280056 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1121 14:40:45.769371  280056 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-696683 NodeName:newest-cni-696683 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:40:45.769497  280056 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-696683"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
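
One constraint implicit in the generated config above: podSubnet (10.42.0.0/16) and serviceSubnet (10.96.0.0/12) must be disjoint, or pod and service routing collide. A small self-contained check using Go's net/netip, included purely as an illustration (this validation step is not itself shown in the log):

// cidr_check.go - illustrative sanity check for the subnets in the kubeadm
// config above: two aligned CIDR prefixes overlap iff one contains the
// other's base address.
package main

import (
	"fmt"
	"net/netip"
)

func overlaps(a, b netip.Prefix) bool {
	return a.Contains(b.Addr()) || b.Contains(a.Addr())
}

func main() {
	pod := netip.MustParsePrefix("10.42.0.0/16")
	svc := netip.MustParsePrefix("10.96.0.0/12")
	fmt.Println("overlap:", overlaps(pod, svc)) // overlap: false
}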
	
	I1121 14:40:45.769548  280056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:40:45.777468  280056 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:40:45.777525  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:40:45.785019  280056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1121 14:40:45.796834  280056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:40:45.808433  280056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1121 14:40:45.820149  280056 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:40:45.823519  280056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:40:45.832775  280056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:45.917710  280056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:45.942977  280056 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683 for IP: 192.168.85.2
	I1121 14:40:45.942996  280056 certs.go:195] generating shared ca certs ...
	I1121 14:40:45.943016  280056 certs.go:227] acquiring lock for ca certs: {Name:mkde3a7d6f17b238f06eab3a140993599f1b4367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:45.943143  280056 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key
	I1121 14:40:45.943197  280056 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key
	I1121 14:40:45.943209  280056 certs.go:257] generating profile certs ...
	I1121 14:40:45.943287  280056 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/client.key
	I1121 14:40:45.943338  280056 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.key.78303e51
	I1121 14:40:45.943372  280056 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.key
	I1121 14:40:45.943471  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem (1338 bytes)
	W1121 14:40:45.943505  280056 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542_empty.pem, impossibly tiny 0 bytes
	I1121 14:40:45.943516  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:40:45.943543  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:40:45.943582  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:40:45.943611  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem (1679 bytes)
	I1121 14:40:45.943651  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:40:45.944261  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:40:45.962656  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:40:45.981773  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:40:46.000183  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 14:40:46.026245  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:40:46.046663  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:40:46.062648  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:40:46.079837  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:40:46.096146  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:40:46.112465  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem --> /usr/share/ca-certificates/14542.pem (1338 bytes)
	I1121 14:40:46.128984  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /usr/share/ca-certificates/145422.pem (1708 bytes)
	I1121 14:40:46.145773  280056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:40:46.157581  280056 ssh_runner.go:195] Run: openssl version
	I1121 14:40:46.163196  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145422.pem && ln -fs /usr/share/ca-certificates/145422.pem /etc/ssl/certs/145422.pem"
	I1121 14:40:46.171390  280056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145422.pem
	I1121 14:40:46.174733  280056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145422.pem
	I1121 14:40:46.174777  280056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145422.pem
	I1121 14:40:46.211212  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145422.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:40:46.218830  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:40:46.226780  280056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:46.230239  280056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:46.230281  280056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:46.264064  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:40:46.271501  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14542.pem && ln -fs /usr/share/ca-certificates/14542.pem /etc/ssl/certs/14542.pem"
	I1121 14:40:46.279591  280056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14542.pem
	I1121 14:40:46.282952  280056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14542.pem
	I1121 14:40:46.282984  280056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	I1121 14:40:46.316214  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14542.pem /etc/ssl/certs/51391683.0"
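
The three test -L / ln -fs commands above implement OpenSSL's hashed-directory convention: each CA in /etc/ssl/certs needs a <subject-hash>.0 symlink so openssl can look it up by subject. A sketch of creating one such link in Go, shelling out to openssl the same way the log does (paths are examples taken from the log):

// cert_hash_link.go - a sketch of the hashed-directory symlink step above;
// not minikube's actual code. Requires openssl on PATH and write access to
// /etc/ssl/certs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		return // symlink already present, nothing to do
	}
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}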
	I1121 14:40:46.323317  280056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:40:46.327082  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:40:46.362145  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:40:46.397494  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:40:46.432068  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:40:46.476192  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:40:46.524752  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
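
Each openssl x509 -checkend 86400 call above asserts that the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. The same check can be done natively, as in this illustrative Go sketch (the cert path is one of the files checked in the log):

// cert_expiry.go - a native-Go equivalent of "openssl x509 -checkend 86400";
// sketch only, mirroring the checks above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	b, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(b)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 86400s")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}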
	I1121 14:40:46.572490  280056 kubeadm.go:401] StartCluster: {Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:46.572631  280056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:40:46.572688  280056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:40:46.608977  280056 cri.go:89] found id: "15917aa53c8197587b8ebdb80d10a679b2a7abe6ff5a81b0d4f5a42900e02412"
	I1121 14:40:46.609002  280056 cri.go:89] found id: "76d7dc76ff36d3fb6387582649e9fc04ab3c1cbd059a19721997d005f5434abc"
	I1121 14:40:46.609007  280056 cri.go:89] found id: "ecf45bf1d37d86d0c9346ae4b4f597dfe7f80bbc47df49bb0994a548a8922b4b"
	I1121 14:40:46.609011  280056 cri.go:89] found id: "958b1593ef47f88f59d980553b03bdcf6b5f2c94efadd777421a6a497aa6ba37"
	I1121 14:40:46.609015  280056 cri.go:89] found id: ""
	I1121 14:40:46.609064  280056 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 14:40:46.623457  280056 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:40:46Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:40:46.623543  280056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:40:46.631552  280056 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:40:46.631604  280056 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:40:46.631642  280056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:40:46.639112  280056 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:40:46.640392  280056 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-696683" does not appear in /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:46.641294  280056 kubeconfig.go:62] /home/jenkins/minikube-integration/21847-11045/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-696683" cluster setting kubeconfig missing "newest-cni-696683" context setting]
	I1121 14:40:46.642656  280056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:46.644901  280056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:40:46.652585  280056 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1121 14:40:46.652614  280056 kubeadm.go:602] duration metric: took 21.003413ms to restartPrimaryControlPlane
	I1121 14:40:46.652622  280056 kubeadm.go:403] duration metric: took 80.144736ms to StartCluster
	I1121 14:40:46.652645  280056 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:46.652695  280056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:46.655150  280056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:46.655378  280056 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:40:46.655488  280056 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:40:46.655593  280056 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-696683"
	I1121 14:40:46.655610  280056 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-696683"
	W1121 14:40:46.655619  280056 addons.go:248] addon storage-provisioner should already be in state true
	I1121 14:40:46.655632  280056 config.go:182] Loaded profile config "newest-cni-696683": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:46.655645  280056 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:46.655665  280056 addons.go:70] Setting dashboard=true in profile "newest-cni-696683"
	I1121 14:40:46.655693  280056 addons.go:239] Setting addon dashboard=true in "newest-cni-696683"
	W1121 14:40:46.655703  280056 addons.go:248] addon dashboard should already be in state true
	I1121 14:40:46.655689  280056 addons.go:70] Setting default-storageclass=true in profile "newest-cni-696683"
	I1121 14:40:46.655739  280056 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:46.655746  280056 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-696683"
	I1121 14:40:46.656081  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.656134  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.656263  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.659699  280056 out.go:179] * Verifying Kubernetes components...
	I1121 14:40:46.660933  280056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:46.682004  280056 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1121 14:40:46.682004  280056 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:40:46.682500  280056 addons.go:239] Setting addon default-storageclass=true in "newest-cni-696683"
	W1121 14:40:46.682522  280056 addons.go:248] addon default-storageclass should already be in state true
	I1121 14:40:46.682547  280056 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:46.683001  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.686740  280056 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:46.686759  280056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:40:46.686806  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:46.688141  280056 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1121 14:40:43.976021  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	W1121 14:40:45.976308  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	I1121 14:40:46.689188  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1121 14:40:46.689209  280056 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1121 14:40:46.689271  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:46.713217  280056 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:46.713242  280056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:40:46.713295  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:46.720516  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:46.724111  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:46.739551  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:46.802053  280056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:46.815536  280056 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:40:46.815609  280056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:40:46.828043  280056 api_server.go:72] duration metric: took 172.633997ms to wait for apiserver process to appear ...
	I1121 14:40:46.828064  280056 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:40:46.828080  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:46.838809  280056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:46.840678  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1121 14:40:46.840695  280056 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1121 14:40:46.852409  280056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:46.856391  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1121 14:40:46.856410  280056 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1121 14:40:46.871966  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1121 14:40:46.871983  280056 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1121 14:40:46.887375  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1121 14:40:46.887424  280056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1121 14:40:46.902141  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1121 14:40:46.902162  280056 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1121 14:40:46.917178  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1121 14:40:46.917195  280056 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1121 14:40:46.930976  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1121 14:40:46.930993  280056 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1121 14:40:46.944066  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1121 14:40:46.944083  280056 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1121 14:40:46.956286  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 14:40:46.956305  280056 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1121 14:40:46.968997  280056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 14:40:48.462794  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1121 14:40:48.462825  280056 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1121 14:40:48.462841  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:48.469024  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1121 14:40:48.469051  280056 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1121 14:40:48.829162  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:48.834337  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 14:40:48.963574  280056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.124616797s)
	I1121 14:40:48.963650  280056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.111184837s)
	I1121 14:40:48.963723  280056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.994699389s)
	I1121 14:40:48.965217  280056 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-696683 addons enable metrics-server
	
	I1121 14:40:48.973715  280056 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1121 14:40:48.974963  280056 addons.go:530] duration metric: took 2.319478862s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1121 14:40:49.328711  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:49.333400  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 14:40:49.829132  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:49.833697  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 14:40:49.834649  280056 api_server.go:141] control plane version: v1.34.1
	I1121 14:40:49.834670  280056 api_server.go:131] duration metric: took 3.006599871s to wait for apiserver health ...
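The loop above polls the apiserver's /healthz endpoint until it returns 200; while the post-start hooks are still settling, the 500 body enumerates each check as [+] ok or [-] failed (here rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes). A minimal Go sketch of the same polling pattern, not minikube's actual api_server.go code; the URL and poll interval are assumptions, and TLS verification is skipped only because a throwaway probe against a local test cluster is assumed:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Local test cluster assumed; do not skip verification in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			// A 500 body lists each check as [+] ok or [-] failed,
			// like the poststarthook lines in the log above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("healthz did not return 200 within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}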
	I1121 14:40:49.834678  280056 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:40:49.838227  280056 system_pods.go:59] 8 kube-system pods found
	I1121 14:40:49.838263  280056 system_pods.go:61] "coredns-66bc5c9577-ncl4f" [93a097a2-31da-4456-8435-e1a976f3d7f9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 14:40:49.838273  280056 system_pods.go:61] "etcd-newest-cni-696683" [113e31f1-f22b-4ed8-adcb-8c12d55e1f4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:40:49.838286  280056 system_pods.go:61] "kindnet-m6v5n" [98b995f3-7968-4e19-abc1-10772001bd6c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1121 14:40:49.838301  280056 system_pods.go:61] "kube-apiserver-newest-cni-696683" [a046bba0-991c-4291-b89a-a0e64e3686b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:40:49.838311  280056 system_pods.go:61] "kube-controller-manager-newest-cni-696683" [dd3689f1-9ccf-4bca-8147-1779d92c3598] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:40:49.838318  280056 system_pods.go:61] "kube-proxy-2dkdg" [13ba7b82-bf92-4b76-a812-685c12ecb21c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1121 14:40:49.838331  280056 system_pods.go:61] "kube-scheduler-newest-cni-696683" [57fd312e-bc77-4ecb-9f3b-caa50247e033] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:40:49.838337  280056 system_pods.go:61] "storage-provisioner" [3cf44ed4-4cd8-4655-aef5-38415eb66de4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 14:40:49.838351  280056 system_pods.go:74] duration metric: took 3.666864ms to wait for pod list to return data ...
	I1121 14:40:49.838364  280056 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:40:49.840748  280056 default_sa.go:45] found service account: "default"
	I1121 14:40:49.840769  280056 default_sa.go:55] duration metric: took 2.395802ms for default service account to be created ...
	I1121 14:40:49.840783  280056 kubeadm.go:587] duration metric: took 3.185377365s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
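The system_pods.go and default_sa.go steps above are plain list calls against the API server. A minimal client-go sketch of the pod-list half, an illustration rather than minikube's own helper; the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for illustration.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Mirrors the "N kube-system pods found" lines above.
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
	}
}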
	I1121 14:40:49.840808  280056 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:40:49.842953  280056 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:40:49.842978  280056 node_conditions.go:123] node cpu capacity is 8
	I1121 14:40:49.842993  280056 node_conditions.go:105] duration metric: took 2.175119ms to run NodePressure ...
	I1121 14:40:49.843009  280056 start.go:242] waiting for startup goroutines ...
	I1121 14:40:49.843022  280056 start.go:247] waiting for cluster config update ...
	I1121 14:40:49.843039  280056 start.go:256] writing updated cluster config ...
	I1121 14:40:49.843325  280056 ssh_runner.go:195] Run: rm -f paused
	I1121 14:40:49.887622  280056 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:40:49.890008  280056 out.go:179] * Done! kubectl is now configured to use "newest-cni-696683" cluster and "default" namespace by default
	W1121 14:40:45.694185  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:47.694654  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:50.194324  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:48.477939  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	I1121 14:40:50.487901  269911 node_ready.go:49] node "auto-989875" is "Ready"
	I1121 14:40:50.487937  269911 node_ready.go:38] duration metric: took 11.014560663s for node "auto-989875" to be "Ready" ...
	I1121 14:40:50.487951  269911 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:40:50.488000  269911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:40:50.506437  269911 api_server.go:72] duration metric: took 11.333890908s to wait for apiserver process to appear ...
	I1121 14:40:50.506462  269911 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:40:50.506481  269911 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1121 14:40:50.511381  269911 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1121 14:40:50.512348  269911 api_server.go:141] control plane version: v1.34.1
	I1121 14:40:50.512374  269911 api_server.go:131] duration metric: took 5.904455ms to wait for apiserver health ...
	I1121 14:40:50.512385  269911 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:40:50.516900  269911 system_pods.go:59] 8 kube-system pods found
	I1121 14:40:50.516933  269911 system_pods.go:61] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:50.516942  269911 system_pods.go:61] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:50.516954  269911 system_pods.go:61] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:50.516961  269911 system_pods.go:61] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:50.516969  269911 system_pods.go:61] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:50.516975  269911 system_pods.go:61] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:50.516983  269911 system_pods.go:61] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:50.516988  269911 system_pods.go:61] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending
	I1121 14:40:50.516995  269911 system_pods.go:74] duration metric: took 4.603561ms to wait for pod list to return data ...
	I1121 14:40:50.517018  269911 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:40:50.519971  269911 default_sa.go:45] found service account: "default"
	I1121 14:40:50.519990  269911 default_sa.go:55] duration metric: took 2.962898ms for default service account to be created ...
	I1121 14:40:50.520000  269911 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:40:50.523136  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:50.523178  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:50.523193  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:50.523202  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:50.523207  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:50.523212  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:50.523218  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:50.523222  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:50.523233  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending
	I1121 14:40:50.523254  269911 retry.go:31] will retry after 276.59635ms: missing components: kube-dns
	I1121 14:40:50.803782  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:50.803812  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:50.803820  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:50.803826  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:50.803830  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:50.803843  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:50.803847  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:50.803850  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:50.803854  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:40:50.803868  269911 retry.go:31] will retry after 254.453611ms: missing components: kube-dns
	I1121 14:40:51.063022  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:51.063048  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:51.063054  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:51.063060  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:51.063064  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:51.063070  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:51.063073  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:51.063076  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:51.063080  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:40:51.063093  269911 retry.go:31] will retry after 307.771212ms: missing components: kube-dns
	I1121 14:40:51.375222  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:51.375255  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:51.375268  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:51.375276  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:51.375282  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:51.375288  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:51.375299  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:51.375304  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:51.375315  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:40:51.375332  269911 retry.go:31] will retry after 408.234241ms: missing components: kube-dns
	I1121 14:40:51.790035  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:51.790067  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Running
	I1121 14:40:51.790076  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:51.790082  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:51.790088  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:51.790095  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:51.790101  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:51.790106  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:51.790111  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Running
	I1121 14:40:51.790124  269911 system_pods.go:126] duration metric: took 1.270114943s to wait for k8s-apps to be running ...
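The retry.go lines above wait with a jittered, growing delay between polls (276ms, 254ms, 307ms, 408ms) until no component is missing. A sketch of that retry shape; the base delay, growth factor, and jitter range are assumptions, not minikube's exact constants:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil polls check() with a jittered, growing delay until it returns
// true or the attempts run out.
func retryUntil(check func() bool, base time.Duration, attempts int) bool {
	delay := base
	for i := 0; i < attempts; i++ {
		if check() {
			return true
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %s\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 5 / 4 // grow the base delay ~25% per attempt (assumed factor)
	}
	return false
}

func main() {
	i := 0
	retryUntil(func() bool { i++; return i > 3 }, 250*time.Millisecond, 10)
}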
	I1121 14:40:51.790137  269911 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:40:51.790190  269911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:40:51.806346  269911 system_svc.go:56] duration metric: took 16.201575ms WaitForService to wait for kubelet
	I1121 14:40:51.806377  269911 kubeadm.go:587] duration metric: took 12.633833991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:40:51.806402  269911 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:40:51.808958  269911 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:40:51.808980  269911 node_conditions.go:123] node cpu capacity is 8
	I1121 14:40:51.808992  269911 node_conditions.go:105] duration metric: took 2.584392ms to run NodePressure ...
	I1121 14:40:51.809003  269911 start.go:242] waiting for startup goroutines ...
	I1121 14:40:51.809009  269911 start.go:247] waiting for cluster config update ...
	I1121 14:40:51.809019  269911 start.go:256] writing updated cluster config ...
	I1121 14:40:51.809271  269911 ssh_runner.go:195] Run: rm -f paused
	I1121 14:40:51.812826  269911 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:40:51.816346  269911 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r6m4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.820311  269911 pod_ready.go:94] pod "coredns-66bc5c9577-r6m4z" is "Ready"
	I1121 14:40:51.820332  269911 pod_ready.go:86] duration metric: took 3.96803ms for pod "coredns-66bc5c9577-r6m4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.822259  269911 pod_ready.go:83] waiting for pod "etcd-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.826005  269911 pod_ready.go:94] pod "etcd-auto-989875" is "Ready"
	I1121 14:40:51.826024  269911 pod_ready.go:86] duration metric: took 3.74738ms for pod "etcd-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.827872  269911 pod_ready.go:83] waiting for pod "kube-apiserver-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.831284  269911 pod_ready.go:94] pod "kube-apiserver-auto-989875" is "Ready"
	I1121 14:40:51.831303  269911 pod_ready.go:86] duration metric: took 3.411512ms for pod "kube-apiserver-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.833002  269911 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:52.217619  269911 pod_ready.go:94] pod "kube-controller-manager-auto-989875" is "Ready"
	I1121 14:40:52.217641  269911 pod_ready.go:86] duration metric: took 384.619243ms for pod "kube-controller-manager-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:52.417771  269911 pod_ready.go:83] waiting for pod "kube-proxy-ttpnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:52.816803  269911 pod_ready.go:94] pod "kube-proxy-ttpnr" is "Ready"
	I1121 14:40:52.816827  269911 pod_ready.go:86] duration metric: took 399.031224ms for pod "kube-proxy-ttpnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:53.017303  269911 pod_ready.go:83] waiting for pod "kube-scheduler-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:53.417158  269911 pod_ready.go:94] pod "kube-scheduler-auto-989875" is "Ready"
	I1121 14:40:53.417179  269911 pod_ready.go:86] duration metric: took 399.853474ms for pod "kube-scheduler-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:53.417190  269911 pod_ready.go:40] duration metric: took 1.604337241s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:40:53.465649  269911 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:40:53.467062  269911 out.go:179] * Done! kubectl is now configured to use "auto-989875" cluster and "default" namespace by default
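The pod_ready.go checks above reduce to inspecting each pod's PodReady condition. A small sketch of that predicate, an assumed helper for illustration rather than minikube's actual code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady mirrors the check behind the pod_ready.go lines above: a pod
// counts as "Ready" when its PodReady condition reports True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
		},
	}
	fmt.Println(isPodReady(pod)) // true
}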
	
	
	==> CRI-O <==
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.746164119Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.746186547Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.746206549Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.746330546Z" level=info msg="Created container 1c1be146e89e0ac17f82a731250519d13aa441f10efd72b55f37ffd6a8766f48: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hp5ll/kubernetes-dashboard" id=13ef77b8-7a30-46c4-94e8-5791575a7472 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.746882165Z" level=info msg="Starting container: 1c1be146e89e0ac17f82a731250519d13aa441f10efd72b55f37ffd6a8766f48" id=a80c3d77-e52a-4255-ad0d-0c5c5f6da099 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.748765743Z" level=info msg="Started container" PID=1657 containerID=1c1be146e89e0ac17f82a731250519d13aa441f10efd72b55f37ffd6a8766f48 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hp5ll/kubernetes-dashboard id=a80c3d77-e52a-4255-ad0d-0c5c5f6da099 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5014c5412846f5a852cccc0619ef0a487f1f114c8c0a43f0fe51d304a08cf54f
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.750089913Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.750112762Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.750134778Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.754230158Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.754250842Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.754266665Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.757688604Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.757713766Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.860428459Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=68b88f20-36c7-42b7-9b77-309d6774850f name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.865030409Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f39b62d5-2591-45d1-a605-cda1ef566e9c name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.868488197Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv/dashboard-metrics-scraper" id=70a9d142-8d41-4ab3-b3b8-c8b754451ade name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.868652678Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.875868163Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.876386059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.905728764Z" level=info msg="Created container ae9c6391b9097ef248f2a7247fad69dec5e3f671efd813eb51b9341624a3d3d5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv/dashboard-metrics-scraper" id=70a9d142-8d41-4ab3-b3b8-c8b754451ade name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.906265302Z" level=info msg="Starting container: ae9c6391b9097ef248f2a7247fad69dec5e3f671efd813eb51b9341624a3d3d5" id=6d0b38ee-683c-4af2-9a32-17096fa977bb name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.908060259Z" level=info msg="Started container" PID=1758 containerID=ae9c6391b9097ef248f2a7247fad69dec5e3f671efd813eb51b9341624a3d3d5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv/dashboard-metrics-scraper id=6d0b38ee-683c-4af2-9a32-17096fa977bb name=/runtime.v1.RuntimeService/StartContainer sandboxID=01ddbbee4dcdcf0488062e173c82117348aa3dfc63bed60deddbdb50caa395de
	Nov 21 14:40:46 embed-certs-441390 crio[558]: time="2025-11-21T14:40:46.016932487Z" level=info msg="Removing container: 0052b2528fbb8a39a9cd0cbb4770df81185fc9bdb90d8106906ed8acb4863030" id=c541b19f-b6cf-4a05-b6f5-c418ffc3ecc9 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 14:40:46 embed-certs-441390 crio[558]: time="2025-11-21T14:40:46.031938806Z" level=info msg="Removed container 0052b2528fbb8a39a9cd0cbb4770df81185fc9bdb90d8106906ed8acb4863030: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv/dashboard-metrics-scraper" id=c541b19f-b6cf-4a05-b6f5-c418ffc3ecc9 name=/runtime.v1.RuntimeService/RemoveContainer
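The CNI monitoring events above (WRITE on 10-kindnet.conflist.temp, then RENAME, then CREATE of 10-kindnet.conflist) are the signature of an atomic config replacement observed through a filesystem watcher. A minimal sketch of the same watch pattern using fsnotify; the directory comes from the log, and error handling is trimmed:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()
	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	// Runs until interrupted, printing one line per event, much like the
	// "CNI monitoring event" lines above. An atomic replace shows up as
	// WRITE on the .temp file, then RENAME, then CREATE of the final name.
	for {
		select {
		case ev := <-watcher.Events:
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err := <-watcher.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}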
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ae9c6391b9097       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   2                   01ddbbee4dcdc       dashboard-metrics-scraper-6ffb444bf9-s9hgv   kubernetes-dashboard
	1c1be146e89e0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   22 seconds ago      Running             kubernetes-dashboard        0                   5014c5412846f       kubernetes-dashboard-855c9754f9-hp5ll        kubernetes-dashboard
	95767056d7b10       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           32 seconds ago      Running             coredns                     0                   d3c120471293f       coredns-66bc5c9577-sbjhs                     kube-system
	1f4f5f406d42a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           32 seconds ago      Running             busybox                     1                   b37906500ef0a       busybox                                      default
	fa41668278e81       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           32 seconds ago      Exited              storage-provisioner         0                   d047aba838c92       storage-provisioner                          kube-system
	b3a0a42501ce1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           32 seconds ago      Running             kindnet-cni                 0                   1a741fa733a71       kindnet-pg6qj                                kube-system
	81750ffcd002e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           32 seconds ago      Running             kube-proxy                  0                   2e36f3f9a3334       kube-proxy-m2nzt                             kube-system
	d77dc478ac3af       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           35 seconds ago      Running             etcd                        0                   a14a1a9a6c820       etcd-embed-certs-441390                      kube-system
	26a9f97051884       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           35 seconds ago      Running             kube-controller-manager     0                   0884898cba1d3       kube-controller-manager-embed-certs-441390   kube-system
	89526413a6f74       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           35 seconds ago      Running             kube-apiserver              0                   e9c9066811e1e       kube-apiserver-embed-certs-441390            kube-system
	0760d88ca4d91       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           35 seconds ago      Running             kube-scheduler              0                   5eba25b27e5af       kube-scheduler-embed-certs-441390            kube-system
	
	
	==> coredns [95767056d7b108c98614d1dd0610b157001ab0f1932ef578fc0f0e6fdf7a90bb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33037 - 18880 "HINFO IN 4834428811184416202.4769941339570836985. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.094540512s
	
	
	==> describe nodes <==
	Name:               embed-certs-441390
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-441390
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=embed-certs-441390
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_39_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:39:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-441390
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:40:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:40:21 +0000   Fri, 21 Nov 2025 14:39:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:40:21 +0000   Fri, 21 Nov 2025 14:39:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:40:21 +0000   Fri, 21 Nov 2025 14:39:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:40:21 +0000   Fri, 21 Nov 2025 14:39:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-441390
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                f6f8f703-6de7-4044-b431-06d9e8823119
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 coredns-66bc5c9577-sbjhs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     86s
	  kube-system                 etcd-embed-certs-441390                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         91s
	  kube-system                 kindnet-pg6qj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      86s
	  kube-system                 kube-apiserver-embed-certs-441390             250m (3%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-embed-certs-441390    200m (2%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-m2nzt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-embed-certs-441390             100m (1%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-s9hgv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hp5ll         0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 84s                kube-proxy       
	  Normal  Starting                 32s                kube-proxy       
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s (x8 over 96s)  kubelet          Node embed-certs-441390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s (x8 over 96s)  kubelet          Node embed-certs-441390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s (x8 over 96s)  kubelet          Node embed-certs-441390 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    91s                kubelet          Node embed-certs-441390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  91s                kubelet          Node embed-certs-441390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     91s                kubelet          Node embed-certs-441390 status is now: NodeHasSufficientPID
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           87s                node-controller  Node embed-certs-441390 event: Registered Node embed-certs-441390 in Controller
	  Normal  NodeReady                74s                kubelet          Node embed-certs-441390 status is now: NodeReady
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node embed-certs-441390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node embed-certs-441390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node embed-certs-441390 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node embed-certs-441390 event: Registered Node embed-certs-441390 in Controller
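For reference, the Allocated resources percentages in the node description derive from allocatable capacity: 850m CPU requested out of 8 CPUs (8000m) is 10.625%, shown truncated as 10%. A one-line check:

package main

import "fmt"

func main() {
	requestsMilli, allocatableMilli := 850, 8000
	fmt.Printf("cpu %dm (%d%%)\n", requestsMilli, requestsMilli*100/allocatableMilli) // cpu 850m (10%)
}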
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [d77dc478ac3af84688fd0b17964f2958adba526b379a9beee309ee2ce20ef8ab] <==
	{"level":"warn","ts":"2025-11-21T14:40:20.887395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.905596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.918164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.928115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.936831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.944593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.951989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.961604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.969477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.978745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.984530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.995192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.000215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.007860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.015346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.022117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.030006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.037418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.044920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.053392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.064712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.082438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.089463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.096288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.171148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50698","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:40:55 up  1:23,  0 user,  load average: 4.40, 3.04, 1.94
	Linux embed-certs-441390 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b3a0a42501ce1cf520bdc52efb98a024759ca435ce8b9848519add262264914a] <==
	I1121 14:40:22.430933       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:40:22.431287       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1121 14:40:22.431413       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:40:22.431429       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:40:22.431454       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:40:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:40:22.732973       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:40:22.732997       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:40:22.733009       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:40:22.733643       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:40:23.033736       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:40:23.033771       1 metrics.go:72] Registering metrics
	I1121 14:40:23.130934       1 controller.go:711] "Syncing nftables rules"
	I1121 14:40:32.733184       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:40:32.733241       1 main.go:301] handling current node
	I1121 14:40:42.734634       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:40:42.734678       1 main.go:301] handling current node
	I1121 14:40:52.737634       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:40:52.737663       1 main.go:301] handling current node
	
	
	==> kube-apiserver [89526413a6f7420bdb1189dd04428b799769f4b2d2b5cbf920e078cac420b1ac] <==
	I1121 14:40:21.871772       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 14:40:21.871803       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:40:21.871828       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:40:21.872087       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 14:40:21.919834       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1121 14:40:21.919880       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1121 14:40:21.919892       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 14:40:21.919954       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1121 14:40:21.921021       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:40:21.921888       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1121 14:40:21.922786       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1121 14:40:21.922860       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:40:21.937215       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:40:21.943621       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 14:40:21.956627       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:40:22.287199       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:40:22.317847       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:40:22.337543       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:40:22.347751       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:40:22.389238       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.50.240"}
	I1121 14:40:22.399433       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.2.118"}
	I1121 14:40:22.757926       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:40:25.209606       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:40:25.608357       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:40:25.707761       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [26a9f970518848d1b900fdd0d942efb823d83d328447dae842211d42697b5a1a] <==
	I1121 14:40:25.207991       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:40:25.208818       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:40:25.208841       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:40:25.210329       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 14:40:25.211247       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:40:25.211406       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 14:40:25.211507       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 14:40:25.211918       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 14:40:25.211933       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 14:40:25.211941       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 14:40:25.212623       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 14:40:25.212667       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:40:25.213601       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 14:40:25.213695       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 14:40:25.215128       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 14:40:25.215735       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:40:25.215748       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:40:25.216214       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:40:25.220425       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:40:25.241686       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:40:25.241712       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:40:25.246002       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 14:40:25.246117       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 14:40:25.246302       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-441390"
	I1121 14:40:25.246422       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [81750ffcd002e880aab0fc94c97b3c53e0c4bfc576f3abd469310642ac74e31c] <==
	I1121 14:40:22.262465       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:40:22.333825       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:40:22.435189       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:40:22.435837       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1121 14:40:22.435988       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:40:22.459330       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:40:22.459444       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:40:22.466267       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:40:22.466762       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:40:22.466800       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:40:22.472808       1 config.go:200] "Starting service config controller"
	I1121 14:40:22.476496       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:40:22.472897       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:40:22.473430       1 config.go:309] "Starting node config controller"
	I1121 14:40:22.476632       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:40:22.476657       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:40:22.476345       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:40:22.476781       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:40:22.476829       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:40:22.576615       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:40:22.577809       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:40:22.577915       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0760d88ca4d91b1009c8a72ad47ebcb1d7dd0be3b46f0aa30647c629d08bd762] <==
	I1121 14:40:22.043650       1 serving.go:386] Generated self-signed cert in-memory
	I1121 14:40:23.216661       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 14:40:23.216698       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:40:23.221924       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1121 14:40:23.221965       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1121 14:40:23.222126       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 14:40:23.222152       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 14:40:23.222193       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:40:23.222207       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:40:23.222536       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:40:23.222715       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 14:40:23.322152       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1121 14:40:23.322212       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 14:40:23.322255       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:40:22 embed-certs-441390 kubelet[715]: I1121 14:40:22.004004     715 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-embed-certs-441390"
	Nov 21 14:40:22 embed-certs-441390 kubelet[715]: E1121 14:40:22.010472     715 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-441390\" already exists" pod="kube-system/kube-apiserver-embed-certs-441390"
	Nov 21 14:40:25 embed-certs-441390 kubelet[715]: I1121 14:40:25.858390     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fa58c7e1-02da-426d-a23c-4e127db4c9ae-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hp5ll\" (UID: \"fa58c7e1-02da-426d-a23c-4e127db4c9ae\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hp5ll"
	Nov 21 14:40:25 embed-certs-441390 kubelet[715]: I1121 14:40:25.858438     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl8j6\" (UniqueName: \"kubernetes.io/projected/fa58c7e1-02da-426d-a23c-4e127db4c9ae-kube-api-access-nl8j6\") pod \"kubernetes-dashboard-855c9754f9-hp5ll\" (UID: \"fa58c7e1-02da-426d-a23c-4e127db4c9ae\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hp5ll"
	Nov 21 14:40:25 embed-certs-441390 kubelet[715]: I1121 14:40:25.858457     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-s9hgv\" (UID: \"c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv"
	Nov 21 14:40:25 embed-certs-441390 kubelet[715]: I1121 14:40:25.858475     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcqg8\" (UniqueName: \"kubernetes.io/projected/c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549-kube-api-access-xcqg8\") pod \"dashboard-metrics-scraper-6ffb444bf9-s9hgv\" (UID: \"c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv"
	Nov 21 14:40:28 embed-certs-441390 kubelet[715]: I1121 14:40:28.959334     715 scope.go:117] "RemoveContainer" containerID="af98fdc8a4cda40b6ccb9295f6dad35985047e76a0f94f249e21d033e9f4ad01"
	Nov 21 14:40:29 embed-certs-441390 kubelet[715]: I1121 14:40:29.840925     715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 21 14:40:29 embed-certs-441390 kubelet[715]: I1121 14:40:29.970899     715 scope.go:117] "RemoveContainer" containerID="0052b2528fbb8a39a9cd0cbb4770df81185fc9bdb90d8106906ed8acb4863030"
	Nov 21 14:40:29 embed-certs-441390 kubelet[715]: E1121 14:40:29.971067     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9hgv_kubernetes-dashboard(c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv" podUID="c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549"
	Nov 21 14:40:29 embed-certs-441390 kubelet[715]: I1121 14:40:29.971161     715 scope.go:117] "RemoveContainer" containerID="af98fdc8a4cda40b6ccb9295f6dad35985047e76a0f94f249e21d033e9f4ad01"
	Nov 21 14:40:30 embed-certs-441390 kubelet[715]: I1121 14:40:30.975068     715 scope.go:117] "RemoveContainer" containerID="0052b2528fbb8a39a9cd0cbb4770df81185fc9bdb90d8106906ed8acb4863030"
	Nov 21 14:40:30 embed-certs-441390 kubelet[715]: E1121 14:40:30.975275     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9hgv_kubernetes-dashboard(c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv" podUID="c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549"
	Nov 21 14:40:32 embed-certs-441390 kubelet[715]: I1121 14:40:32.993018     715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hp5ll" podStartSLOduration=1.387856628 podStartE2EDuration="7.992998158s" podCreationTimestamp="2025-11-21 14:40:25 +0000 UTC" firstStartedPulling="2025-11-21 14:40:26.10372327 +0000 UTC m=+7.389673288" lastFinishedPulling="2025-11-21 14:40:32.708864799 +0000 UTC m=+13.994814818" observedRunningTime="2025-11-21 14:40:32.992920464 +0000 UTC m=+14.278870503" watchObservedRunningTime="2025-11-21 14:40:32.992998158 +0000 UTC m=+14.278948198"
	Nov 21 14:40:35 embed-certs-441390 kubelet[715]: I1121 14:40:35.553261     715 scope.go:117] "RemoveContainer" containerID="0052b2528fbb8a39a9cd0cbb4770df81185fc9bdb90d8106906ed8acb4863030"
	Nov 21 14:40:35 embed-certs-441390 kubelet[715]: E1121 14:40:35.553422     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9hgv_kubernetes-dashboard(c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv" podUID="c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549"
	Nov 21 14:40:45 embed-certs-441390 kubelet[715]: I1121 14:40:45.859869     715 scope.go:117] "RemoveContainer" containerID="0052b2528fbb8a39a9cd0cbb4770df81185fc9bdb90d8106906ed8acb4863030"
	Nov 21 14:40:46 embed-certs-441390 kubelet[715]: I1121 14:40:46.015597     715 scope.go:117] "RemoveContainer" containerID="0052b2528fbb8a39a9cd0cbb4770df81185fc9bdb90d8106906ed8acb4863030"
	Nov 21 14:40:46 embed-certs-441390 kubelet[715]: I1121 14:40:46.015840     715 scope.go:117] "RemoveContainer" containerID="ae9c6391b9097ef248f2a7247fad69dec5e3f671efd813eb51b9341624a3d3d5"
	Nov 21 14:40:46 embed-certs-441390 kubelet[715]: E1121 14:40:46.016054     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9hgv_kubernetes-dashboard(c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv" podUID="c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549"
	Nov 21 14:40:51 embed-certs-441390 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:40:51 embed-certs-441390 kubelet[715]: I1121 14:40:51.639665     715 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 21 14:40:51 embed-certs-441390 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:40:51 embed-certs-441390 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 21 14:40:51 embed-certs-441390 systemd[1]: kubelet.service: Consumed 1.177s CPU time.
	
	
	==> kubernetes-dashboard [1c1be146e89e0ac17f82a731250519d13aa441f10efd72b55f37ffd6a8766f48] <==
	2025/11/21 14:40:32 Starting overwatch
	2025/11/21 14:40:32 Using namespace: kubernetes-dashboard
	2025/11/21 14:40:32 Using in-cluster config to connect to apiserver
	2025/11/21 14:40:32 Using secret token for csrf signing
	2025/11/21 14:40:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 14:40:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 14:40:32 Successful initial request to the apiserver, version: v1.34.1
	2025/11/21 14:40:32 Generating JWE encryption key
	2025/11/21 14:40:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 14:40:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 14:40:32 Initializing JWE encryption key from synchronized object
	2025/11/21 14:40:32 Creating in-cluster Sidecar client
	2025/11/21 14:40:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 14:40:32 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [fa41668278e8170f60e1dccd430711a3eed075f795e571c30cc83710f2742a90] <==
	I1121 14:40:22.240069       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 14:40:52.243070       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-441390 -n embed-certs-441390
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-441390 -n embed-certs-441390: exit status 2 (378.776224ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-441390 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-441390
helpers_test.go:243: (dbg) docker inspect embed-certs-441390:

-- stdout --
	[
	    {
	        "Id": "0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78",
	        "Created": "2025-11-21T14:39:07.796898766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272230,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:40:12.324804064Z",
	            "FinishedAt": "2025-11-21T14:40:11.144402786Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78/hostname",
	        "HostsPath": "/var/lib/docker/containers/0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78/hosts",
	        "LogPath": "/var/lib/docker/containers/0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78/0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78-json.log",
	        "Name": "/embed-certs-441390",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-441390:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-441390",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0ce231a2efd9649e52b4171740e54a8824b68b993c22eaf04e41cdad5d399c78",
	                "LowerDir": "/var/lib/docker/overlay2/600fd769bdab16b7cfa0c469ccebb67ba68133c5b4bce708cd3a08511bd496b4-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/600fd769bdab16b7cfa0c469ccebb67ba68133c5b4bce708cd3a08511bd496b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/600fd769bdab16b7cfa0c469ccebb67ba68133c5b4bce708cd3a08511bd496b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/600fd769bdab16b7cfa0c469ccebb67ba68133c5b4bce708cd3a08511bd496b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-441390",
	                "Source": "/var/lib/docker/volumes/embed-certs-441390/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-441390",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-441390",
	                "name.minikube.sigs.k8s.io": "embed-certs-441390",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1616feb3cbffb189fbe9d18492a128fe43525dd11d97b9987610b1e0b6cff695",
	            "SandboxKey": "/var/run/docker/netns/1616feb3cbff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-441390": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e6dc762b4b87807c44de5ce5e6dedcc7963047110765e9594324098021783415",
	                    "EndpointID": "de560efd9ad64dbe7258497f23f60d9b61d0a87aca5fe5e3ff1cc4ca4e688908",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "0e:4b:8c:31:ec:89",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-441390",
	                        "0ce231a2efd9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-441390 -n embed-certs-441390
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-441390 -n embed-certs-441390: exit status 2 (337.118547ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-441390 logs -n 25
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-441390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ image   │ no-preload-589411 image list --format=json                                                                                                                                                                                                    │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:39 UTC │
	│ pause   │ -p no-preload-589411 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │                     │
	│ stop    │ -p embed-certs-441390 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p cert-expiration-046125 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-046125       │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p kubernetes-upgrade-214044                                                                                                                                                                                                                  │ kubernetes-upgrade-214044    │ jenkins │ v1.37.0 │ 21 Nov 25 14:39 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p disable-driver-mounts-708207                                                                                                                                                                                                               │ disable-driver-mounts-708207 │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p default-k8s-diff-port-859276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-859276 │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ delete  │ -p no-preload-589411                                                                                                                                                                                                                          │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p cert-expiration-046125                                                                                                                                                                                                                     │ cert-expiration-046125       │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p no-preload-589411                                                                                                                                                                                                                          │ no-preload-589411            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p newest-cni-696683 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p auto-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-441390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p embed-certs-441390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ addons  │ enable metrics-server -p newest-cni-696683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ stop    │ -p newest-cni-696683 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ addons  │ enable dashboard -p newest-cni-696683 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ start   │ -p newest-cni-696683 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ image   │ newest-cni-696683 image list --format=json                                                                                                                                                                                                    │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ pause   │ -p newest-cni-696683 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ image   │ embed-certs-441390 image list --format=json                                                                                                                                                                                                   │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ pause   │ -p embed-certs-441390 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-441390           │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	│ ssh     │ -p auto-989875 pgrep -a kubelet                                                                                                                                                                                                               │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │ 21 Nov 25 14:40 UTC │
	│ delete  │ -p newest-cni-696683                                                                                                                                                                                                                          │ newest-cni-696683            │ jenkins │ v1.37.0 │ 21 Nov 25 14:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:40:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:40:39.698658  280056 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:40:39.699061  280056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:39.699072  280056 out.go:374] Setting ErrFile to fd 2...
	I1121 14:40:39.699078  280056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:40:39.699382  280056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:40:39.700016  280056 out.go:368] Setting JSON to false
	I1121 14:40:39.701601  280056 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4989,"bootTime":1763731051,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:40:39.701719  280056 start.go:143] virtualization: kvm guest
	I1121 14:40:39.703395  280056 out.go:179] * [newest-cni-696683] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:40:39.705010  280056 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:40:39.705085  280056 notify.go:221] Checking for updates...
	I1121 14:40:39.709543  280056 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:40:39.710889  280056 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:39.711654  280056 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:40:39.712608  280056 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:40:39.202164  269911 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:39.202179  269911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:40:39.202225  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:39.203262  269911 addons.go:239] Setting addon default-storageclass=true in "auto-989875"
	I1121 14:40:39.203302  269911 host.go:66] Checking if "auto-989875" exists ...
	I1121 14:40:39.203757  269911 cli_runner.go:164] Run: docker container inspect auto-989875 --format={{.State.Status}}
	I1121 14:40:39.231112  269911 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:39.231135  269911 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:40:39.231188  269911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989875
	I1121 14:40:39.231351  269911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/auto-989875/id_rsa Username:docker}
	I1121 14:40:39.253202  269911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/auto-989875/id_rsa Username:docker}
	I1121 14:40:39.264883  269911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:40:39.321852  269911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:39.368717  269911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:39.374920  269911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:39.469946  269911 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1121 14:40:39.473299  269911 node_ready.go:35] waiting up to 15m0s for node "auto-989875" to be "Ready" ...
	I1121 14:40:39.713942  269911 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 14:40:39.714030  280056 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:40:39.715503  280056 config.go:182] Loaded profile config "newest-cni-696683": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:39.715983  280056 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:40:39.743756  280056 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:40:39.743915  280056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:40:39.815430  280056 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:40:39.803326466 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:40:39.815546  280056 docker.go:319] overlay module found
	I1121 14:40:39.816656  280056 out.go:179] * Using the docker driver based on existing profile
	I1121 14:40:39.817754  280056 start.go:309] selected driver: docker
	I1121 14:40:39.817774  280056 start.go:930] validating driver "docker" against &{Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:39.817892  280056 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:40:39.818542  280056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:40:39.891888  280056 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:40:39.880844572 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:40:39.892243  280056 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1121 14:40:39.892278  280056 cni.go:84] Creating CNI manager for ""
	I1121 14:40:39.892328  280056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:40:39.892373  280056 start.go:353] cluster config:
	{Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:39.894009  280056 out.go:179] * Starting "newest-cni-696683" primary control-plane node in "newest-cni-696683" cluster
	I1121 14:40:39.894992  280056 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:40:39.895975  280056 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:40:39.896900  280056 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:40:39.896944  280056 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 14:40:39.896959  280056 cache.go:65] Caching tarball of preloaded images
	I1121 14:40:39.896995  280056 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:40:39.897060  280056 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 14:40:39.897075  280056 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:40:39.897184  280056 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/config.json ...
	I1121 14:40:39.918549  280056 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:40:39.918582  280056 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:40:39.918603  280056 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:40:39.918629  280056 start.go:360] acquireMachinesLock for newest-cni-696683: {Name:mk685873e16cf8d4315d67b3bf50f89f3c32618f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:40:39.918691  280056 start.go:364] duration metric: took 39.301µs to acquireMachinesLock for "newest-cni-696683"
	I1121 14:40:39.918713  280056 start.go:96] Skipping create...Using existing machine configuration
	I1121 14:40:39.918723  280056 fix.go:54] fixHost starting: 
	I1121 14:40:39.918941  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:39.939232  280056 fix.go:112] recreateIfNeeded on newest-cni-696683: state=Stopped err=<nil>
	W1121 14:40:39.939257  280056 fix.go:138] unexpected machine state, will restart: <nil>
	W1121 14:40:37.195055  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:39.196240  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	I1121 14:40:39.715037  269911 addons.go:530] duration metric: took 541.90535ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 14:40:39.974357  269911 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-989875" context rescaled to 1 replicas
	W1121 14:40:41.476592  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	I1121 14:40:39.940709  280056 out.go:252] * Restarting existing docker container for "newest-cni-696683" ...
	I1121 14:40:39.940774  280056 cli_runner.go:164] Run: docker start newest-cni-696683
	I1121 14:40:40.204292  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:40.225047  280056 kic.go:430] container "newest-cni-696683" state is running.
	I1121 14:40:40.225352  280056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-696683
	I1121 14:40:40.245950  280056 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/config.json ...
	I1121 14:40:40.246193  280056 machine.go:94] provisionDockerMachine start ...
	I1121 14:40:40.246264  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:40.266155  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:40.266469  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:40.266487  280056 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 14:40:40.267187  280056 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58902->127.0.0.1:33099: read: connection reset by peer
	I1121 14:40:43.397206  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-696683
	
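The reset at 14:40:40 is almost certainly because sshd inside the just-restarted container is not yet listening, so the first TCP dial fails and libmachine retries until the `hostname` probe above succeeds three seconds later. A minimal manual probe of the same forwarded port (a sketch reusing the port, key path, and user printed elsewhere in this log) would be:

	ssh -o StrictHostKeyChecking=no -p 33099 \
	  -i /home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa \
	  docker@127.0.0.1 hostname
	# prints "newest-cni-696683" once sshd is up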
	I1121 14:40:43.397237  280056 ubuntu.go:182] provisioning hostname "newest-cni-696683"
	I1121 14:40:43.397300  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:43.416243  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:43.416538  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:43.416568  280056 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-696683 && echo "newest-cni-696683" | sudo tee /etc/hostname
	I1121 14:40:43.552946  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-696683
	
	I1121 14:40:43.553020  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:43.570469  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:43.570726  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:43.570747  280056 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-696683' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-696683/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-696683' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 14:40:43.699459  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 14:40:43.699487  280056 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21847-11045/.minikube CaCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21847-11045/.minikube}
	I1121 14:40:43.699509  280056 ubuntu.go:190] setting up certificates
	I1121 14:40:43.699518  280056 provision.go:84] configureAuth start
	I1121 14:40:43.699572  280056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-696683
	I1121 14:40:43.716911  280056 provision.go:143] copyHostCerts
	I1121 14:40:43.716971  280056 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem, removing ...
	I1121 14:40:43.716988  280056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem
	I1121 14:40:43.717063  280056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/ca.pem (1078 bytes)
	I1121 14:40:43.717170  280056 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem, removing ...
	I1121 14:40:43.717182  280056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem
	I1121 14:40:43.717225  280056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/cert.pem (1123 bytes)
	I1121 14:40:43.717301  280056 exec_runner.go:144] found /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem, removing ...
	I1121 14:40:43.717311  280056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem
	I1121 14:40:43.717354  280056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21847-11045/.minikube/key.pem (1679 bytes)
	I1121 14:40:43.717424  280056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem org=jenkins.newest-cni-696683 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-696683]
	I1121 14:40:43.898083  280056 provision.go:177] copyRemoteCerts
	I1121 14:40:43.898146  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 14:40:43.898203  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:43.915505  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.009431  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 14:40:44.026983  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 14:40:44.043724  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 14:40:44.059839  280056 provision.go:87] duration metric: took 360.308976ms to configureAuth
	I1121 14:40:44.059858  280056 ubuntu.go:206] setting minikube options for container-runtime
	I1121 14:40:44.060029  280056 config.go:182] Loaded profile config "newest-cni-696683": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:44.060145  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.078061  280056 main.go:143] libmachine: Using SSH client type: native
	I1121 14:40:44.078262  280056 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1121 14:40:44.078281  280056 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 14:40:44.359271  280056 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
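Here the provisioner writes a one-line environment file consumed by the cri-o unit and restarts the service, so the service CIDR (10.96.0.0/12) is treated as an insecure registry range. Assuming the container is still up, the result can be confirmed from the host with:

	docker exec newest-cni-696683 cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '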
	I1121 14:40:44.359300  280056 machine.go:97] duration metric: took 4.113090842s to provisionDockerMachine
	I1121 14:40:44.359333  280056 start.go:293] postStartSetup for "newest-cni-696683" (driver="docker")
	I1121 14:40:44.359359  280056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 14:40:44.359441  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 14:40:44.359503  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.377727  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.471221  280056 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 14:40:44.474531  280056 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 14:40:44.474582  280056 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 14:40:44.474595  280056 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/addons for local assets ...
	I1121 14:40:44.474657  280056 filesync.go:126] Scanning /home/jenkins/minikube-integration/21847-11045/.minikube/files for local assets ...
	I1121 14:40:44.474769  280056 filesync.go:149] local asset: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem -> 145422.pem in /etc/ssl/certs
	I1121 14:40:44.474885  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1121 14:40:44.482193  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:40:44.498766  280056 start.go:296] duration metric: took 139.419384ms for postStartSetup
	I1121 14:40:44.498841  280056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:40:44.498885  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.516283  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.607254  280056 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 14:40:44.611996  280056 fix.go:56] duration metric: took 4.693269423s for fixHost
	I1121 14:40:44.612015  280056 start.go:83] releasing machines lock for "newest-cni-696683", held for 4.693312828s
	I1121 14:40:44.612074  280056 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-696683
	I1121 14:40:44.629484  280056 ssh_runner.go:195] Run: cat /version.json
	I1121 14:40:44.629530  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.629596  280056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 14:40:44.629660  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:44.646651  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:44.647257  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	W1121 14:40:41.693191  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:43.693977  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	I1121 14:40:44.789269  280056 ssh_runner.go:195] Run: systemctl --version
	I1121 14:40:44.795157  280056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 14:40:44.829469  280056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 14:40:44.833726  280056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 14:40:44.833770  280056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 14:40:44.841442  280056 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 14:40:44.841462  280056 start.go:496] detecting cgroup driver to use...
	I1121 14:40:44.841500  280056 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 14:40:44.841546  280056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 14:40:44.855704  280056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 14:40:44.867322  280056 docker.go:218] disabling cri-docker service (if available) ...
	I1121 14:40:44.867355  280056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 14:40:44.880286  280056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 14:40:44.891778  280056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 14:40:44.971173  280056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 14:40:45.053340  280056 docker.go:234] disabling docker service ...
	I1121 14:40:45.053430  280056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 14:40:45.066798  280056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 14:40:45.078751  280056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 14:40:45.158914  280056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 14:40:45.236074  280056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
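The stop/disable/mask sequence above (sockets first, then services) ensures socket activation cannot silently bring Docker or cri-dockerd back while cri-o owns the CRI socket; masking symlinks the unit file to /dev/null so even an explicit start fails. The equivalent one-off commands, as a sketch, are:

	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service   # unit now points at /dev/null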
	I1121 14:40:45.247464  280056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 14:40:45.260830  280056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 14:40:45.260881  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.268922  280056 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 14:40:45.268972  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.276871  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.284760  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.292909  280056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 14:40:45.300239  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.308497  280056 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.316091  280056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 14:40:45.324294  280056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 14:40:45.330973  280056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 14:40:45.337651  280056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:45.412162  280056 ssh_runner.go:195] Run: sudo systemctl restart crio
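Reconstructing from the sed edits above, the drop-in /etc/crio/crio.conf.d/02-crio.conf should now contain lines equivalent to the following sketch (surrounding TOML sections omitted):

	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	pause_image = "registry.k8s.io/pause:3.10.1"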
	I1121 14:40:45.548953  280056 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 14:40:45.549022  280056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 14:40:45.552808  280056 start.go:564] Will wait 60s for crictl version
	I1121 14:40:45.552866  280056 ssh_runner.go:195] Run: which crictl
	I1121 14:40:45.556653  280056 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 14:40:45.580611  280056 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 14:40:45.580686  280056 ssh_runner.go:195] Run: crio --version
	I1121 14:40:45.607820  280056 ssh_runner.go:195] Run: crio --version
	I1121 14:40:45.636081  280056 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 14:40:45.637049  280056 cli_runner.go:164] Run: docker network inspect newest-cni-696683 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:40:45.652698  280056 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1121 14:40:45.656512  280056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
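This one-liner is the usual pattern for editing a root-owned file from a non-root shell: the redirection runs unprivileged into a temp file, and only the final copy needs sudo. In general form (the grep pattern here is a placeholder):

	# drop the stale entry, append the fresh one, then install the result as root
	{ grep -v 'stale-entry' /etc/hosts; echo '192.168.85.1 host.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts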
	I1121 14:40:45.667700  280056 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1121 14:40:45.668667  280056 kubeadm.go:884] updating cluster {Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 14:40:45.668785  280056 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:40:45.668828  280056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:40:45.700321  280056 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:40:45.700343  280056 crio.go:433] Images already preloaded, skipping extraction
	I1121 14:40:45.700378  280056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 14:40:45.724113  280056 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 14:40:45.724131  280056 cache_images.go:86] Images are preloaded, skipping loading
	I1121 14:40:45.724139  280056 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1121 14:40:45.724223  280056 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-696683 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 14:40:45.724281  280056 ssh_runner.go:195] Run: crio config
	I1121 14:40:45.769317  280056 cni.go:84] Creating CNI manager for ""
	I1121 14:40:45.769335  280056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 14:40:45.769351  280056 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1121 14:40:45.769371  280056 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-696683 NodeName:newest-cni-696683 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 14:40:45.769497  280056 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-696683"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
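This rendered manifest is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later diffed against the copy left by the previous run. As a sanity check it could also be fed to kubeadm directly; a sketch, assuming the staged v1.34.1 binaries (recent kubeadm releases ship a `config validate` subcommand):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new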
	I1121 14:40:45.769548  280056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 14:40:45.777468  280056 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 14:40:45.777525  280056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 14:40:45.785019  280056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1121 14:40:45.796834  280056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 14:40:45.808433  280056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1121 14:40:45.820149  280056 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1121 14:40:45.823519  280056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 14:40:45.832775  280056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:45.917710  280056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:45.942977  280056 certs.go:69] Setting up /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683 for IP: 192.168.85.2
	I1121 14:40:45.942996  280056 certs.go:195] generating shared ca certs ...
	I1121 14:40:45.943016  280056 certs.go:227] acquiring lock for ca certs: {Name:mkde3a7d6f17b238f06eab3a140993599f1b4367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:45.943143  280056 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key
	I1121 14:40:45.943197  280056 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key
	I1121 14:40:45.943209  280056 certs.go:257] generating profile certs ...
	I1121 14:40:45.943287  280056 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/client.key
	I1121 14:40:45.943338  280056 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.key.78303e51
	I1121 14:40:45.943372  280056 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.key
	I1121 14:40:45.943471  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem (1338 bytes)
	W1121 14:40:45.943505  280056 certs.go:480] ignoring /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542_empty.pem, impossibly tiny 0 bytes
	I1121 14:40:45.943516  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 14:40:45.943543  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem (1078 bytes)
	I1121 14:40:45.943582  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem (1123 bytes)
	I1121 14:40:45.943611  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/certs/key.pem (1679 bytes)
	I1121 14:40:45.943651  280056 certs.go:484] found cert: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem (1708 bytes)
	I1121 14:40:45.944261  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 14:40:45.962656  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 14:40:45.981773  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 14:40:46.000183  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 14:40:46.026245  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 14:40:46.046663  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1121 14:40:46.062648  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 14:40:46.079837  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/newest-cni-696683/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 14:40:46.096146  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 14:40:46.112465  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/certs/14542.pem --> /usr/share/ca-certificates/14542.pem (1338 bytes)
	I1121 14:40:46.128984  280056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/ssl/certs/145422.pem --> /usr/share/ca-certificates/145422.pem (1708 bytes)
	I1121 14:40:46.145773  280056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 14:40:46.157581  280056 ssh_runner.go:195] Run: openssl version
	I1121 14:40:46.163196  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145422.pem && ln -fs /usr/share/ca-certificates/145422.pem /etc/ssl/certs/145422.pem"
	I1121 14:40:46.171390  280056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145422.pem
	I1121 14:40:46.174733  280056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 14:01 /usr/share/ca-certificates/145422.pem
	I1121 14:40:46.174777  280056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145422.pem
	I1121 14:40:46.211212  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145422.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 14:40:46.218830  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 14:40:46.226780  280056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:46.230239  280056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 13:56 /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:46.230281  280056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 14:40:46.264064  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 14:40:46.271501  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14542.pem && ln -fs /usr/share/ca-certificates/14542.pem /etc/ssl/certs/14542.pem"
	I1121 14:40:46.279591  280056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14542.pem
	I1121 14:40:46.282952  280056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 14:01 /usr/share/ca-certificates/14542.pem
	I1121 14:40:46.282984  280056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14542.pem
	I1121 14:40:46.316214  280056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14542.pem /etc/ssl/certs/51391683.0"
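The `openssl x509 -hash -noout` calls above compute the subject-name hash that OpenSSL uses to look up CAs in /etc/ssl/certs, and each symlink is named <hash>.0 accordingly (b5213941.0 for minikubeCA.pem above). For example:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the symlink /etc/ssl/certs/b5213941.0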
	I1121 14:40:46.323317  280056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 14:40:46.327082  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 14:40:46.362145  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 14:40:46.397494  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 14:40:46.432068  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 14:40:46.476192  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 14:40:46.524752  280056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1121 14:40:46.572490  280056 kubeadm.go:401] StartCluster: {Name:newest-cni-696683 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-696683 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:40:46.572631  280056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 14:40:46.572688  280056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 14:40:46.608977  280056 cri.go:89] found id: "15917aa53c8197587b8ebdb80d10a679b2a7abe6ff5a81b0d4f5a42900e02412"
	I1121 14:40:46.609002  280056 cri.go:89] found id: "76d7dc76ff36d3fb6387582649e9fc04ab3c1cbd059a19721997d005f5434abc"
	I1121 14:40:46.609007  280056 cri.go:89] found id: "ecf45bf1d37d86d0c9346ae4b4f597dfe7f80bbc47df49bb0994a548a8922b4b"
	I1121 14:40:46.609011  280056 cri.go:89] found id: "958b1593ef47f88f59d980553b03bdcf6b5f2c94efadd777421a6a497aa6ba37"
	I1121 14:40:46.609015  280056 cri.go:89] found id: ""
	I1121 14:40:46.609064  280056 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 14:40:46.623457  280056 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:40:46Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:40:46.623543  280056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 14:40:46.631552  280056 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 14:40:46.631604  280056 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 14:40:46.631642  280056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 14:40:46.639112  280056 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:40:46.640392  280056 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-696683" does not appear in /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:46.641294  280056 kubeconfig.go:62] /home/jenkins/minikube-integration/21847-11045/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-696683" cluster setting kubeconfig missing "newest-cni-696683" context setting]
	I1121 14:40:46.642656  280056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:46.644901  280056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 14:40:46.652585  280056 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1121 14:40:46.652614  280056 kubeadm.go:602] duration metric: took 21.003413ms to restartPrimaryControlPlane
	I1121 14:40:46.652622  280056 kubeadm.go:403] duration metric: took 80.144736ms to StartCluster
	I1121 14:40:46.652645  280056 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:46.652695  280056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:40:46.655150  280056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:40:46.655378  280056 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:40:46.655488  280056 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:40:46.655593  280056 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-696683"
	I1121 14:40:46.655610  280056 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-696683"
	W1121 14:40:46.655619  280056 addons.go:248] addon storage-provisioner should already be in state true
	I1121 14:40:46.655632  280056 config.go:182] Loaded profile config "newest-cni-696683": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:40:46.655645  280056 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:46.655665  280056 addons.go:70] Setting dashboard=true in profile "newest-cni-696683"
	I1121 14:40:46.655693  280056 addons.go:239] Setting addon dashboard=true in "newest-cni-696683"
	W1121 14:40:46.655703  280056 addons.go:248] addon dashboard should already be in state true
	I1121 14:40:46.655689  280056 addons.go:70] Setting default-storageclass=true in profile "newest-cni-696683"
	I1121 14:40:46.655739  280056 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:46.655746  280056 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-696683"
	I1121 14:40:46.656081  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.656134  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.656263  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.659699  280056 out.go:179] * Verifying Kubernetes components...
	I1121 14:40:46.660933  280056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:40:46.682004  280056 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1121 14:40:46.682004  280056 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:40:46.682500  280056 addons.go:239] Setting addon default-storageclass=true in "newest-cni-696683"
	W1121 14:40:46.682522  280056 addons.go:248] addon default-storageclass should already be in state true
	I1121 14:40:46.682547  280056 host.go:66] Checking if "newest-cni-696683" exists ...
	I1121 14:40:46.683001  280056 cli_runner.go:164] Run: docker container inspect newest-cni-696683 --format={{.State.Status}}
	I1121 14:40:46.686740  280056 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:46.686759  280056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:40:46.686806  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:46.688141  280056 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1121 14:40:43.976021  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	W1121 14:40:45.976308  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	I1121 14:40:46.689188  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1121 14:40:46.689209  280056 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1121 14:40:46.689271  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:46.713217  280056 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:46.713242  280056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:40:46.713295  280056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-696683
	I1121 14:40:46.720516  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:46.724111  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:46.739551  280056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/newest-cni-696683/id_rsa Username:docker}
	I1121 14:40:46.802053  280056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:40:46.815536  280056 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:40:46.815609  280056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:40:46.828043  280056 api_server.go:72] duration metric: took 172.633997ms to wait for apiserver process to appear ...
	I1121 14:40:46.828064  280056 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:40:46.828080  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:46.838809  280056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:40:46.840678  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1121 14:40:46.840695  280056 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1121 14:40:46.852409  280056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:40:46.856391  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1121 14:40:46.856410  280056 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1121 14:40:46.871966  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1121 14:40:46.871983  280056 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1121 14:40:46.887375  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1121 14:40:46.887424  280056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1121 14:40:46.902141  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1121 14:40:46.902162  280056 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1121 14:40:46.917178  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1121 14:40:46.917195  280056 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1121 14:40:46.930976  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1121 14:40:46.930993  280056 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1121 14:40:46.944066  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1121 14:40:46.944083  280056 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1121 14:40:46.956286  280056 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 14:40:46.956305  280056 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1121 14:40:46.968997  280056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1121 14:40:48.462794  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1121 14:40:48.462825  280056 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1121 14:40:48.462841  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:48.469024  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1121 14:40:48.469051  280056 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1121 14:40:48.829162  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:48.834337  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:40:48.834367  280056 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
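Once RBAC lets the probe through, /healthz answers 500 with the full check list: every subsystem is up except the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks, which report "reason withheld" until the default roles and priority classes have been written to etcd. minikube keeps polling the same endpoint until every check flips to [+] and it returns 200. The two failing checks can be isolated with:

	curl -ks 'https://192.168.85.2:8443/healthz?verbose' | grep '^\[-\]'
	# [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	# [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld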
	I1121 14:40:48.963574  280056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.124616797s)
	I1121 14:40:48.963650  280056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.111184837s)
	I1121 14:40:48.963723  280056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.994699389s)
	I1121 14:40:48.965217  280056 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-696683 addons enable metrics-server
	
	I1121 14:40:48.973715  280056 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1121 14:40:48.974963  280056 addons.go:530] duration metric: took 2.319478862s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1121 14:40:49.328711  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:49.333400  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 14:40:49.333420  280056 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
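
The 500s above are expected during bootstrap: /healthz aggregates per-hook checks, and it keeps failing while poststarthooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes are still running. The api_server.go entries show minikube simply re-polling the endpoint until it returns 200, as it does on the next attempt below. A minimal sketch of that poll loop, with a placeholder URL and interval (this is not minikube's actual code):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz re-polls /healthz until it returns 200 or the timeout
    // elapses, mirroring the "Checking apiserver healthz" entries above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // The bootstrap apiserver cert is self-signed; the real client
            // pins the cluster CA. Verification is skipped here only for brevity.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned 200: ok
                }
                // 500 with [-]poststarthook/... lines: hooks still running, retry.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s not healthy after %v", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.85.2:8443/healthz", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }
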
	I1121 14:40:49.829132  280056 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1121 14:40:49.833697  280056 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1121 14:40:49.834649  280056 api_server.go:141] control plane version: v1.34.1
	I1121 14:40:49.834670  280056 api_server.go:131] duration metric: took 3.006599871s to wait for apiserver health ...
	I1121 14:40:49.834678  280056 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:40:49.838227  280056 system_pods.go:59] 8 kube-system pods found
	I1121 14:40:49.838263  280056 system_pods.go:61] "coredns-66bc5c9577-ncl4f" [93a097a2-31da-4456-8435-e1a976f3d7f9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 14:40:49.838273  280056 system_pods.go:61] "etcd-newest-cni-696683" [113e31f1-f22b-4ed8-adcb-8c12d55e1f4c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 14:40:49.838286  280056 system_pods.go:61] "kindnet-m6v5n" [98b995f3-7968-4e19-abc1-10772001bd6c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1121 14:40:49.838301  280056 system_pods.go:61] "kube-apiserver-newest-cni-696683" [a046bba0-991c-4291-b89a-a0e64e3686b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 14:40:49.838311  280056 system_pods.go:61] "kube-controller-manager-newest-cni-696683" [dd3689f1-9ccf-4bca-8147-1779d92c3598] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 14:40:49.838318  280056 system_pods.go:61] "kube-proxy-2dkdg" [13ba7b82-bf92-4b76-a812-685c12ecb21c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1121 14:40:49.838331  280056 system_pods.go:61] "kube-scheduler-newest-cni-696683" [57fd312e-bc77-4ecb-9f3b-caa50247e033] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 14:40:49.838337  280056 system_pods.go:61] "storage-provisioner" [3cf44ed4-4cd8-4655-aef5-38415eb66de4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1121 14:40:49.838351  280056 system_pods.go:74] duration metric: took 3.666864ms to wait for pod list to return data ...
	I1121 14:40:49.838364  280056 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:40:49.840748  280056 default_sa.go:45] found service account: "default"
	I1121 14:40:49.840769  280056 default_sa.go:55] duration metric: took 2.395802ms for default service account to be created ...
	I1121 14:40:49.840783  280056 kubeadm.go:587] duration metric: took 3.185377365s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1121 14:40:49.840808  280056 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:40:49.842953  280056 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:40:49.842978  280056 node_conditions.go:123] node cpu capacity is 8
	I1121 14:40:49.842993  280056 node_conditions.go:105] duration metric: took 2.175119ms to run NodePressure ...
	I1121 14:40:49.843009  280056 start.go:242] waiting for startup goroutines ...
	I1121 14:40:49.843022  280056 start.go:247] waiting for cluster config update ...
	I1121 14:40:49.843039  280056 start.go:256] writing updated cluster config ...
	I1121 14:40:49.843325  280056 ssh_runner.go:195] Run: rm -f paused
	I1121 14:40:49.887622  280056 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:40:49.890008  280056 out.go:179] * Done! kubectl is now configured to use "newest-cni-696683" cluster and "default" namespace by default
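
The final start.go:628 line compares the kubectl client against the cluster and reports the minor-version skew (0 here, since both are v1.34). A toy sketch of that comparison, with the version strings taken from the log and the parsing entirely hypothetical:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor component of a "major.minor.patch" version.
    func minor(v string) (int, error) {
        parts := strings.Split(v, ".")
        if len(parts) < 2 {
            return 0, fmt.Errorf("unexpected version %q", v)
        }
        return strconv.Atoi(parts[1])
    }

    func main() {
        c, _ := minor("1.34.2") // kubectl
        s, _ := minor("1.34.1") // cluster
        skew := c - s
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: 1.34.2, cluster: 1.34.1 (minor skew: %d)\n", skew)
    }
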
	W1121 14:40:45.694185  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:47.694654  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:50.194324  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:48.477939  269911 node_ready.go:57] node "auto-989875" has "Ready":"False" status (will retry)
	I1121 14:40:50.487901  269911 node_ready.go:49] node "auto-989875" is "Ready"
	I1121 14:40:50.487937  269911 node_ready.go:38] duration metric: took 11.014560663s for node "auto-989875" to be "Ready" ...
	I1121 14:40:50.487951  269911 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:40:50.488000  269911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:40:50.506437  269911 api_server.go:72] duration metric: took 11.333890908s to wait for apiserver process to appear ...
	I1121 14:40:50.506462  269911 api_server.go:88] waiting for apiserver healthz status ...
	I1121 14:40:50.506481  269911 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1121 14:40:50.511381  269911 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1121 14:40:50.512348  269911 api_server.go:141] control plane version: v1.34.1
	I1121 14:40:50.512374  269911 api_server.go:131] duration metric: took 5.904455ms to wait for apiserver health ...
	I1121 14:40:50.512385  269911 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 14:40:50.516900  269911 system_pods.go:59] 8 kube-system pods found
	I1121 14:40:50.516933  269911 system_pods.go:61] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:50.516942  269911 system_pods.go:61] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:50.516954  269911 system_pods.go:61] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:50.516961  269911 system_pods.go:61] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:50.516969  269911 system_pods.go:61] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:50.516975  269911 system_pods.go:61] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:50.516983  269911 system_pods.go:61] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:50.516988  269911 system_pods.go:61] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending
	I1121 14:40:50.516995  269911 system_pods.go:74] duration metric: took 4.603561ms to wait for pod list to return data ...
	I1121 14:40:50.517018  269911 default_sa.go:34] waiting for default service account to be created ...
	I1121 14:40:50.519971  269911 default_sa.go:45] found service account: "default"
	I1121 14:40:50.519990  269911 default_sa.go:55] duration metric: took 2.962898ms for default service account to be created ...
	I1121 14:40:50.520000  269911 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 14:40:50.523136  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:50.523178  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:50.523193  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:50.523202  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:50.523207  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:50.523212  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:50.523218  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:50.523222  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:50.523233  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending
	I1121 14:40:50.523254  269911 retry.go:31] will retry after 276.59635ms: missing components: kube-dns
	I1121 14:40:50.803782  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:50.803812  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:50.803820  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:50.803826  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:50.803830  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:50.803843  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:50.803847  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:50.803850  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:50.803854  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:40:50.803868  269911 retry.go:31] will retry after 254.453611ms: missing components: kube-dns
	I1121 14:40:51.063022  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:51.063048  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:51.063054  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:51.063060  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:51.063064  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:51.063070  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:51.063073  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:51.063076  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:51.063080  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:40:51.063093  269911 retry.go:31] will retry after 307.771212ms: missing components: kube-dns
	I1121 14:40:51.375222  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:51.375255  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 14:40:51.375268  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:51.375276  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:51.375282  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:51.375288  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:51.375299  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:51.375304  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:51.375315  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 14:40:51.375332  269911 retry.go:31] will retry after 408.234241ms: missing components: kube-dns
	I1121 14:40:51.790035  269911 system_pods.go:86] 8 kube-system pods found
	I1121 14:40:51.790067  269911 system_pods.go:89] "coredns-66bc5c9577-r6m4z" [050f6a33-4acd-441d-9ffa-0384c6fcdbdf] Running
	I1121 14:40:51.790076  269911 system_pods.go:89] "etcd-auto-989875" [cbb085cf-35e7-4be8-94c5-391f93e6784d] Running
	I1121 14:40:51.790082  269911 system_pods.go:89] "kindnet-n97mg" [8ca7fbd4-4efa-4144-8e83-f44d0f4d1747] Running
	I1121 14:40:51.790088  269911 system_pods.go:89] "kube-apiserver-auto-989875" [587816fe-3a7e-40b5-b508-1c10e25f7b74] Running
	I1121 14:40:51.790095  269911 system_pods.go:89] "kube-controller-manager-auto-989875" [43506555-d8e8-4f7f-9497-7bd5ce8c9e23] Running
	I1121 14:40:51.790101  269911 system_pods.go:89] "kube-proxy-ttpnr" [ad2df993-a186-4379-bda7-c72cc3ba8c76] Running
	I1121 14:40:51.790106  269911 system_pods.go:89] "kube-scheduler-auto-989875" [506658bb-5300-4ba6-b7aa-f0ee83ba8351] Running
	I1121 14:40:51.790111  269911 system_pods.go:89] "storage-provisioner" [4e53a581-40c0-46fa-b94d-33fbda58669c] Running
	I1121 14:40:51.790124  269911 system_pods.go:126] duration metric: took 1.270114943s to wait for k8s-apps to be running ...
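
The retry.go:31 entries above show the shape of the k8s-apps wait: list kube-system pods, report what is still missing (kube-dns until coredns turns Running), and sleep a short, slightly randomized interval before the next attempt. A self-contained sketch of that loop, with checkMissing standing in for the real pod listing:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForComponents polls until checkMissing returns nothing, sleeping a
    // jittered delay between attempts so waiters don't poll in lockstep.
    func waitForComponents(checkMissing func() []string, timeout time.Duration) error {
        base := 250 * time.Millisecond
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            missing := checkMissing()
            if len(missing) == 0 {
                return nil
            }
            d := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: missing components: %v\n", d, missing)
            time.Sleep(d)
        }
        return fmt.Errorf("timed out waiting for: %v", checkMissing())
    }

    func main() {
        attempt := 0
        err := waitForComponents(func() []string {
            if attempt++; attempt < 4 {
                return []string{"kube-dns"} // missing until coredns is Running
            }
            return nil
        }, 10*time.Second)
        fmt.Println("err:", err)
    }
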
	I1121 14:40:51.790137  269911 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 14:40:51.790190  269911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:40:51.806346  269911 system_svc.go:56] duration metric: took 16.201575ms WaitForService to wait for kubelet
	I1121 14:40:51.806377  269911 kubeadm.go:587] duration metric: took 12.633833991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:40:51.806402  269911 node_conditions.go:102] verifying NodePressure condition ...
	I1121 14:40:51.808958  269911 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 14:40:51.808980  269911 node_conditions.go:123] node cpu capacity is 8
	I1121 14:40:51.808992  269911 node_conditions.go:105] duration metric: took 2.584392ms to run NodePressure ...
	I1121 14:40:51.809003  269911 start.go:242] waiting for startup goroutines ...
	I1121 14:40:51.809009  269911 start.go:247] waiting for cluster config update ...
	I1121 14:40:51.809019  269911 start.go:256] writing updated cluster config ...
	I1121 14:40:51.809271  269911 ssh_runner.go:195] Run: rm -f paused
	I1121 14:40:51.812826  269911 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:40:51.816346  269911 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r6m4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.820311  269911 pod_ready.go:94] pod "coredns-66bc5c9577-r6m4z" is "Ready"
	I1121 14:40:51.820332  269911 pod_ready.go:86] duration metric: took 3.96803ms for pod "coredns-66bc5c9577-r6m4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.822259  269911 pod_ready.go:83] waiting for pod "etcd-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.826005  269911 pod_ready.go:94] pod "etcd-auto-989875" is "Ready"
	I1121 14:40:51.826024  269911 pod_ready.go:86] duration metric: took 3.74738ms for pod "etcd-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.827872  269911 pod_ready.go:83] waiting for pod "kube-apiserver-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.831284  269911 pod_ready.go:94] pod "kube-apiserver-auto-989875" is "Ready"
	I1121 14:40:51.831303  269911 pod_ready.go:86] duration metric: took 3.411512ms for pod "kube-apiserver-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:51.833002  269911 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:52.217619  269911 pod_ready.go:94] pod "kube-controller-manager-auto-989875" is "Ready"
	I1121 14:40:52.217641  269911 pod_ready.go:86] duration metric: took 384.619243ms for pod "kube-controller-manager-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:52.417771  269911 pod_ready.go:83] waiting for pod "kube-proxy-ttpnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:52.816803  269911 pod_ready.go:94] pod "kube-proxy-ttpnr" is "Ready"
	I1121 14:40:52.816827  269911 pod_ready.go:86] duration metric: took 399.031224ms for pod "kube-proxy-ttpnr" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:53.017303  269911 pod_ready.go:83] waiting for pod "kube-scheduler-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:53.417158  269911 pod_ready.go:94] pod "kube-scheduler-auto-989875" is "Ready"
	I1121 14:40:53.417179  269911 pod_ready.go:86] duration metric: took 399.853474ms for pod "kube-scheduler-auto-989875" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 14:40:53.417190  269911 pod_ready.go:40] duration metric: took 1.604337241s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 14:40:53.465649  269911 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 14:40:53.467062  269911 out.go:179] * Done! kubectl is now configured to use "auto-989875" cluster and "default" namespace by default
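
The pod_ready.go entries above wait, label by label, for each kube-system pod to report Ready; the underlying test is the standard PodReady condition. A minimal sketch of that check, assuming the k8s.io/api/core/v1 types (the client plumbing that fetches the pods is omitted):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's PodReady condition is True,
    // the same predicate behind the `pod "..." is "Ready"` lines above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        pod := &corev1.Pod{}
        pod.Status.Conditions = []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionTrue},
        }
        fmt.Println("ready:", isPodReady(pod)) // ready: true
    }
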
	W1121 14:40:52.194537  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	W1121 14:40:54.194637  266798 node_ready.go:57] node "default-k8s-diff-port-859276" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.746164119Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.746186547Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.746206549Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.746330546Z" level=info msg="Created container 1c1be146e89e0ac17f82a731250519d13aa441f10efd72b55f37ffd6a8766f48: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hp5ll/kubernetes-dashboard" id=13ef77b8-7a30-46c4-94e8-5791575a7472 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.746882165Z" level=info msg="Starting container: 1c1be146e89e0ac17f82a731250519d13aa441f10efd72b55f37ffd6a8766f48" id=a80c3d77-e52a-4255-ad0d-0c5c5f6da099 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.748765743Z" level=info msg="Started container" PID=1657 containerID=1c1be146e89e0ac17f82a731250519d13aa441f10efd72b55f37ffd6a8766f48 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hp5ll/kubernetes-dashboard id=a80c3d77-e52a-4255-ad0d-0c5c5f6da099 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5014c5412846f5a852cccc0619ef0a487f1f114c8c0a43f0fe51d304a08cf54f
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.750089913Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.750112762Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.750134778Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.754230158Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.754250842Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.754266665Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.757688604Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:40:32 embed-certs-441390 crio[558]: time="2025-11-21T14:40:32.757713766Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.860428459Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=68b88f20-36c7-42b7-9b77-309d6774850f name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.865030409Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f39b62d5-2591-45d1-a605-cda1ef566e9c name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.868488197Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv/dashboard-metrics-scraper" id=70a9d142-8d41-4ab3-b3b8-c8b754451ade name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.868652678Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.875868163Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.876386059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.905728764Z" level=info msg="Created container ae9c6391b9097ef248f2a7247fad69dec5e3f671efd813eb51b9341624a3d3d5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv/dashboard-metrics-scraper" id=70a9d142-8d41-4ab3-b3b8-c8b754451ade name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.906265302Z" level=info msg="Starting container: ae9c6391b9097ef248f2a7247fad69dec5e3f671efd813eb51b9341624a3d3d5" id=6d0b38ee-683c-4af2-9a32-17096fa977bb name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:40:45 embed-certs-441390 crio[558]: time="2025-11-21T14:40:45.908060259Z" level=info msg="Started container" PID=1758 containerID=ae9c6391b9097ef248f2a7247fad69dec5e3f671efd813eb51b9341624a3d3d5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv/dashboard-metrics-scraper id=6d0b38ee-683c-4af2-9a32-17096fa977bb name=/runtime.v1.RuntimeService/StartContainer sandboxID=01ddbbee4dcdcf0488062e173c82117348aa3dfc63bed60deddbdb50caa395de
	Nov 21 14:40:46 embed-certs-441390 crio[558]: time="2025-11-21T14:40:46.016932487Z" level=info msg="Removing container: 0052b2528fbb8a39a9cd0cbb4770df81185fc9bdb90d8106906ed8acb4863030" id=c541b19f-b6cf-4a05-b6f5-c418ffc3ecc9 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 14:40:46 embed-certs-441390 crio[558]: time="2025-11-21T14:40:46.031938806Z" level=info msg="Removed container 0052b2528fbb8a39a9cd0cbb4770df81185fc9bdb90d8106906ed8acb4863030: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv/dashboard-metrics-scraper" id=c541b19f-b6cf-4a05-b6f5-c418ffc3ecc9 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ae9c6391b9097       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   2                   01ddbbee4dcdc       dashboard-metrics-scraper-6ffb444bf9-s9hgv   kubernetes-dashboard
	1c1be146e89e0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   24 seconds ago      Running             kubernetes-dashboard        0                   5014c5412846f       kubernetes-dashboard-855c9754f9-hp5ll        kubernetes-dashboard
	95767056d7b10       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           34 seconds ago      Running             coredns                     0                   d3c120471293f       coredns-66bc5c9577-sbjhs                     kube-system
	1f4f5f406d42a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           34 seconds ago      Running             busybox                     1                   b37906500ef0a       busybox                                      default
	fa41668278e81       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           34 seconds ago      Exited              storage-provisioner         0                   d047aba838c92       storage-provisioner                          kube-system
	b3a0a42501ce1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           34 seconds ago      Running             kindnet-cni                 0                   1a741fa733a71       kindnet-pg6qj                                kube-system
	81750ffcd002e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           34 seconds ago      Running             kube-proxy                  0                   2e36f3f9a3334       kube-proxy-m2nzt                             kube-system
	d77dc478ac3af       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           37 seconds ago      Running             etcd                        0                   a14a1a9a6c820       etcd-embed-certs-441390                      kube-system
	26a9f97051884       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           37 seconds ago      Running             kube-controller-manager     0                   0884898cba1d3       kube-controller-manager-embed-certs-441390   kube-system
	89526413a6f74       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           37 seconds ago      Running             kube-apiserver              0                   e9c9066811e1e       kube-apiserver-embed-certs-441390            kube-system
	0760d88ca4d91       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           37 seconds ago      Running             kube-scheduler              0                   5eba25b27e5af       kube-scheduler-embed-certs-441390            kube-system
	
	
	==> coredns [95767056d7b108c98614d1dd0610b157001ab0f1932ef578fc0f0e6fdf7a90bb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33037 - 18880 "HINFO IN 4834428811184416202.4769941339570836985. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.094540512s
	
	
	==> describe nodes <==
	Name:               embed-certs-441390
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-441390
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=embed-certs-441390
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_39_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:39:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-441390
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:40:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:40:21 +0000   Fri, 21 Nov 2025 14:39:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:40:21 +0000   Fri, 21 Nov 2025 14:39:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:40:21 +0000   Fri, 21 Nov 2025 14:39:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:40:21 +0000   Fri, 21 Nov 2025 14:39:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-441390
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                f6f8f703-6de7-4044-b431-06d9e8823119
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 coredns-66bc5c9577-sbjhs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     87s
	  kube-system                 etcd-embed-certs-441390                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         92s
	  kube-system                 kindnet-pg6qj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-embed-certs-441390             250m (3%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-embed-certs-441390    200m (2%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-m2nzt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-embed-certs-441390             100m (1%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-s9hgv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hp5ll         0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 86s                kube-proxy       
	  Normal  Starting                 34s                kube-proxy       
	  Normal  Starting                 97s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s (x8 over 97s)  kubelet          Node embed-certs-441390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x8 over 97s)  kubelet          Node embed-certs-441390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x8 over 97s)  kubelet          Node embed-certs-441390 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    92s                kubelet          Node embed-certs-441390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  92s                kubelet          Node embed-certs-441390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     92s                kubelet          Node embed-certs-441390 status is now: NodeHasSufficientPID
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           88s                node-controller  Node embed-certs-441390 event: Registered Node embed-certs-441390 in Controller
	  Normal  NodeReady                75s                kubelet          Node embed-certs-441390 status is now: NodeReady
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node embed-certs-441390 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node embed-certs-441390 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node embed-certs-441390 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node embed-certs-441390 event: Registered Node embed-certs-441390 in Controller
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [d77dc478ac3af84688fd0b17964f2958adba526b379a9beee309ee2ce20ef8ab] <==
	{"level":"warn","ts":"2025-11-21T14:40:20.887395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.905596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.918164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.928115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.936831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.944593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.951989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.961604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.969477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.978745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.984530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.995192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.000215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.007860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.015346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.022117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.030006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.037418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.044920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.053392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.064712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.082438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.089463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.096288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.171148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50698","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:40:57 up  1:23,  0 user,  load average: 4.40, 3.04, 1.94
	Linux embed-certs-441390 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b3a0a42501ce1cf520bdc52efb98a024759ca435ce8b9848519add262264914a] <==
	I1121 14:40:22.430933       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:40:22.431287       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1121 14:40:22.431413       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:40:22.431429       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:40:22.431454       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:40:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:40:22.732973       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:40:22.732997       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:40:22.733009       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:40:22.733643       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1121 14:40:23.033736       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:40:23.033771       1 metrics.go:72] Registering metrics
	I1121 14:40:23.130934       1 controller.go:711] "Syncing nftables rules"
	I1121 14:40:32.733184       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:40:32.733241       1 main.go:301] handling current node
	I1121 14:40:42.734634       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:40:42.734678       1 main.go:301] handling current node
	I1121 14:40:52.737634       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1121 14:40:52.737663       1 main.go:301] handling current node
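
After its caches sync, the kindnet log settles into one "Handling node" pass every ten seconds, a plain periodic reconcile. A minimal sketch of that cadence, with handleNode standing in for the real per-node sync:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        handleNode := func() { fmt.Println("handling current node") }
        ticker := time.NewTicker(10 * time.Second)
        defer ticker.Stop()
        stop := time.After(35 * time.Second) // stand-in for a shutdown signal
        for {
            select {
            case <-ticker.C:
                handleNode() // one pass per tick, as in the log above
            case <-stop:
                return
            }
        }
    }
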
	
	
	==> kube-apiserver [89526413a6f7420bdb1189dd04428b799769f4b2d2b5cbf920e078cac420b1ac] <==
	I1121 14:40:21.871772       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 14:40:21.871803       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:40:21.871828       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:40:21.872087       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 14:40:21.919834       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1121 14:40:21.919880       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1121 14:40:21.919892       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 14:40:21.919954       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1121 14:40:21.921021       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:40:21.921888       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1121 14:40:21.922786       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1121 14:40:21.922860       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:40:21.937215       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:40:21.943621       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 14:40:21.956627       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:40:22.287199       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:40:22.317847       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:40:22.337543       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:40:22.347751       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:40:22.389238       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.50.240"}
	I1121 14:40:22.399433       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.2.118"}
	I1121 14:40:22.757926       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:40:25.209606       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:40:25.608357       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:40:25.707761       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [26a9f970518848d1b900fdd0d942efb823d83d328447dae842211d42697b5a1a] <==
	I1121 14:40:25.207991       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:40:25.208818       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:40:25.208841       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 14:40:25.210329       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 14:40:25.211247       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:40:25.211406       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 14:40:25.211507       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 14:40:25.211918       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 14:40:25.211933       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 14:40:25.211941       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 14:40:25.212623       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 14:40:25.212667       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:40:25.213601       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 14:40:25.213695       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 14:40:25.215128       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 14:40:25.215735       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:40:25.215748       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 14:40:25.216214       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:40:25.220425       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:40:25.241686       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 14:40:25.241712       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:40:25.246002       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 14:40:25.246117       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 14:40:25.246302       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-441390"
	I1121 14:40:25.246422       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [81750ffcd002e880aab0fc94c97b3c53e0c4bfc576f3abd469310642ac74e31c] <==
	I1121 14:40:22.262465       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:40:22.333825       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:40:22.435189       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:40:22.435837       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1121 14:40:22.435988       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:40:22.459330       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:40:22.459444       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:40:22.466267       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:40:22.466762       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:40:22.466800       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:40:22.472808       1 config.go:200] "Starting service config controller"
	I1121 14:40:22.476496       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:40:22.472897       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:40:22.473430       1 config.go:309] "Starting node config controller"
	I1121 14:40:22.476632       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:40:22.476657       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:40:22.476345       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:40:22.476781       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:40:22.476829       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:40:22.576615       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 14:40:22.577809       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:40:22.577915       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0760d88ca4d91b1009c8a72ad47ebcb1d7dd0be3b46f0aa30647c629d08bd762] <==
	I1121 14:40:22.043650       1 serving.go:386] Generated self-signed cert in-memory
	I1121 14:40:23.216661       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 14:40:23.216698       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:40:23.221924       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1121 14:40:23.221965       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1121 14:40:23.222126       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 14:40:23.222152       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 14:40:23.222193       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:40:23.222207       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:40:23.222536       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:40:23.222715       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 14:40:23.322152       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1121 14:40:23.322212       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 14:40:23.322255       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:40:22 embed-certs-441390 kubelet[715]: I1121 14:40:22.004004     715 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-embed-certs-441390"
	Nov 21 14:40:22 embed-certs-441390 kubelet[715]: E1121 14:40:22.010472     715 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-441390\" already exists" pod="kube-system/kube-apiserver-embed-certs-441390"
	Nov 21 14:40:25 embed-certs-441390 kubelet[715]: I1121 14:40:25.858390     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fa58c7e1-02da-426d-a23c-4e127db4c9ae-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hp5ll\" (UID: \"fa58c7e1-02da-426d-a23c-4e127db4c9ae\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hp5ll"
	Nov 21 14:40:25 embed-certs-441390 kubelet[715]: I1121 14:40:25.858438     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl8j6\" (UniqueName: \"kubernetes.io/projected/fa58c7e1-02da-426d-a23c-4e127db4c9ae-kube-api-access-nl8j6\") pod \"kubernetes-dashboard-855c9754f9-hp5ll\" (UID: \"fa58c7e1-02da-426d-a23c-4e127db4c9ae\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hp5ll"
	Nov 21 14:40:25 embed-certs-441390 kubelet[715]: I1121 14:40:25.858457     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-s9hgv\" (UID: \"c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv"
	Nov 21 14:40:25 embed-certs-441390 kubelet[715]: I1121 14:40:25.858475     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcqg8\" (UniqueName: \"kubernetes.io/projected/c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549-kube-api-access-xcqg8\") pod \"dashboard-metrics-scraper-6ffb444bf9-s9hgv\" (UID: \"c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv"
	Nov 21 14:40:28 embed-certs-441390 kubelet[715]: I1121 14:40:28.959334     715 scope.go:117] "RemoveContainer" containerID="af98fdc8a4cda40b6ccb9295f6dad35985047e76a0f94f249e21d033e9f4ad01"
	Nov 21 14:40:29 embed-certs-441390 kubelet[715]: I1121 14:40:29.840925     715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 21 14:40:29 embed-certs-441390 kubelet[715]: I1121 14:40:29.970899     715 scope.go:117] "RemoveContainer" containerID="0052b2528fbb8a39a9cd0cbb4770df81185fc9bdb90d8106906ed8acb4863030"
	Nov 21 14:40:29 embed-certs-441390 kubelet[715]: E1121 14:40:29.971067     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9hgv_kubernetes-dashboard(c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv" podUID="c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549"
	Nov 21 14:40:29 embed-certs-441390 kubelet[715]: I1121 14:40:29.971161     715 scope.go:117] "RemoveContainer" containerID="af98fdc8a4cda40b6ccb9295f6dad35985047e76a0f94f249e21d033e9f4ad01"
	Nov 21 14:40:30 embed-certs-441390 kubelet[715]: I1121 14:40:30.975068     715 scope.go:117] "RemoveContainer" containerID="0052b2528fbb8a39a9cd0cbb4770df81185fc9bdb90d8106906ed8acb4863030"
	Nov 21 14:40:30 embed-certs-441390 kubelet[715]: E1121 14:40:30.975275     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9hgv_kubernetes-dashboard(c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv" podUID="c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549"
	Nov 21 14:40:32 embed-certs-441390 kubelet[715]: I1121 14:40:32.993018     715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hp5ll" podStartSLOduration=1.387856628 podStartE2EDuration="7.992998158s" podCreationTimestamp="2025-11-21 14:40:25 +0000 UTC" firstStartedPulling="2025-11-21 14:40:26.10372327 +0000 UTC m=+7.389673288" lastFinishedPulling="2025-11-21 14:40:32.708864799 +0000 UTC m=+13.994814818" observedRunningTime="2025-11-21 14:40:32.992920464 +0000 UTC m=+14.278870503" watchObservedRunningTime="2025-11-21 14:40:32.992998158 +0000 UTC m=+14.278948198"
	Nov 21 14:40:35 embed-certs-441390 kubelet[715]: I1121 14:40:35.553261     715 scope.go:117] "RemoveContainer" containerID="0052b2528fbb8a39a9cd0cbb4770df81185fc9bdb90d8106906ed8acb4863030"
	Nov 21 14:40:35 embed-certs-441390 kubelet[715]: E1121 14:40:35.553422     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9hgv_kubernetes-dashboard(c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv" podUID="c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549"
	Nov 21 14:40:45 embed-certs-441390 kubelet[715]: I1121 14:40:45.859869     715 scope.go:117] "RemoveContainer" containerID="0052b2528fbb8a39a9cd0cbb4770df81185fc9bdb90d8106906ed8acb4863030"
	Nov 21 14:40:46 embed-certs-441390 kubelet[715]: I1121 14:40:46.015597     715 scope.go:117] "RemoveContainer" containerID="0052b2528fbb8a39a9cd0cbb4770df81185fc9bdb90d8106906ed8acb4863030"
	Nov 21 14:40:46 embed-certs-441390 kubelet[715]: I1121 14:40:46.015840     715 scope.go:117] "RemoveContainer" containerID="ae9c6391b9097ef248f2a7247fad69dec5e3f671efd813eb51b9341624a3d3d5"
	Nov 21 14:40:46 embed-certs-441390 kubelet[715]: E1121 14:40:46.016054     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9hgv_kubernetes-dashboard(c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9hgv" podUID="c15e0e8b-5d23-48fd-a3ff-e6d30f0dd549"
	Nov 21 14:40:51 embed-certs-441390 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:40:51 embed-certs-441390 kubelet[715]: I1121 14:40:51.639665     715 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 21 14:40:51 embed-certs-441390 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:40:51 embed-certs-441390 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 21 14:40:51 embed-certs-441390 systemd[1]: kubelet.service: Consumed 1.177s CPU time.
	
	
	==> kubernetes-dashboard [1c1be146e89e0ac17f82a731250519d13aa441f10efd72b55f37ffd6a8766f48] <==
	2025/11/21 14:40:32 Using namespace: kubernetes-dashboard
	2025/11/21 14:40:32 Using in-cluster config to connect to apiserver
	2025/11/21 14:40:32 Using secret token for csrf signing
	2025/11/21 14:40:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 14:40:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 14:40:32 Successful initial request to the apiserver, version: v1.34.1
	2025/11/21 14:40:32 Generating JWE encryption key
	2025/11/21 14:40:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 14:40:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 14:40:32 Initializing JWE encryption key from synchronized object
	2025/11/21 14:40:32 Creating in-cluster Sidecar client
	2025/11/21 14:40:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 14:40:32 Serving insecurely on HTTP port: 9090
	2025/11/21 14:40:32 Starting overwatch
	
	
	==> storage-provisioner [fa41668278e8170f60e1dccd430711a3eed075f795e571c30cc83710f2742a90] <==
	I1121 14:40:22.240069       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 14:40:52.243070       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
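The storage-provisioner exit above (dial tcp 10.96.0.1:443: i/o timeout) points at the in-cluster apiserver service IP being unreachable from the pod, and the kubelet log shows dashboard-metrics-scraper cycling through CrashLoopBackOff. A minimal triage sketch, assuming the embed-certs-441390 profile is still running and that curl is available in the node image (both assumptions, not verified by this report):

    # Can the node reach the in-cluster apiserver service IP at all?
    out/minikube-linux-amd64 ssh -p embed-certs-441390 "curl -sk --max-time 5 https://10.96.0.1:443/version"
    # Restart count and last exit reason for the crash-looping scraper container.
    kubectl --context embed-certs-441390 -n kubernetes-dashboard get pod dashboard-metrics-scraper-6ffb444bf9-s9hgv \
      -o jsonpath='{.status.containerStatuses[0].restartCount}{" "}{.status.containerStatuses[0].lastState.terminated.reason}{"\n"}'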
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-441390 -n embed-certs-441390
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-441390 -n embed-certs-441390: exit status 2 (320.498936ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-441390 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.44s)
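Re-running just this subtest is often the quickest way to check whether the pause failure reproduces outside CI. A hedged sketch, assuming the standard minikube integration-test layout (the package path, timeout, and any start-args flags the suite needs are assumptions, not taken from this report):

    # go test matches nested subtests with '/'-separated -run patterns.
    go test ./test/integration -run 'TestStartStop/group/embed-certs/serial/Pause' -timeout 60m -v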

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-859276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-859276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (281.49639ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:41:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
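The MK_ADDON_ENABLE_PAUSED failure above reduces to `sudo runc list -f json` failing because /run/runc is absent; without --root, runc reads its default state directory at /run/runc, which may simply not exist if no container has yet been created through it. Two hedged checks against the node (assuming the default-k8s-diff-port-859276 profile is still running), using the same ssh form as the audit log below:

    # Does runc's default state directory exist on the node?
    out/minikube-linux-amd64 ssh -p default-k8s-diff-port-859276 "sudo ls -ld /run/runc"
    # Reproduce the exact command minikube's pause check runs (copied from the stderr above).
    out/minikube-linux-amd64 ssh -p default-k8s-diff-port-859276 "sudo runc list -f json"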
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-859276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-859276 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-859276 describe deploy/metrics-server -n kube-system: exit status 1 (74.004542ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-859276 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
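The assertion expects the rendered Deployment image to carry the fake.domain registry override, but describe returned nothing because the Deployment was never created. When the Deployment does exist, the exact rendered image can be read directly instead of parsing describe output; plain kubectl jsonpath, nothing report-specific:

    kubectl --context default-k8s-diff-port-859276 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'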
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-859276
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-859276:

-- stdout --
	[
	    {
	        "Id": "2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac",
	        "Created": "2025-11-21T14:40:05.048409185Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 268531,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:40:05.082945572Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac/hosts",
	        "LogPath": "/var/lib/docker/containers/2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac/2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac-json.log",
	        "Name": "/default-k8s-diff-port-859276",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-859276:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-859276",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac",
	                "LowerDir": "/var/lib/docker/overlay2/37ea76c5bad6d11bb4dd4e337b8935aab81c4bc411039c222a9d057b3c4b3202-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37ea76c5bad6d11bb4dd4e337b8935aab81c4bc411039c222a9d057b3c4b3202/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37ea76c5bad6d11bb4dd4e337b8935aab81c4bc411039c222a9d057b3c4b3202/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37ea76c5bad6d11bb4dd4e337b8935aab81c4bc411039c222a9d057b3c4b3202/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-859276",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-859276/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-859276",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-859276",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-859276",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "997018c378d4becb9d6d0a396f839bdf0393f886cf529713cc0839e88cbaa797",
	            "SandboxKey": "/var/run/docker/netns/997018c378d4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-859276": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "160a2921f00f660205c7789d2cbe27b525c000c5d85520fd19733f7917bfd7fd",
	                    "EndpointID": "a930241eab5244fd4f2421527e35b6a236a3b1dd825b49397c226a701638b0f3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "5a:f7:58:58:77:a1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-859276",
	                        "2d534a2a3b1f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
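Rather than scanning the full inspect dump for a published port, the same mapping can be read with a docker inspect format string; a minimal sketch against the container above (standard Go-template index syntax):

    # Host port published for the node's SSH port (22/tcp); the dump above shows 33079.
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-859276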
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-859276 -n default-k8s-diff-port-859276
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-859276 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-859276 logs -n 25: (1.212364409s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-989875 sudo systemctl cat kubelet --no-pager                                                                                                               │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ ssh     │ -p auto-989875 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ ssh     │ -p auto-989875 sudo cat /etc/kubernetes/kubelet.conf                                                                                                               │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ ssh     │ -p auto-989875 sudo cat /var/lib/kubelet/config.yaml                                                                                                               │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ ssh     │ -p auto-989875 sudo systemctl status docker --all --full --no-pager                                                                                                │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │                     │
	│ ssh     │ -p auto-989875 sudo systemctl cat docker --no-pager                                                                                                                │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ ssh     │ -p auto-989875 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │                     │
	│ ssh     │ -p auto-989875 sudo docker system info                                                                                                                             │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │                     │
	│ ssh     │ -p auto-989875 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │                     │
	│ ssh     │ -p auto-989875 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ ssh     │ -p auto-989875 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │                     │
	│ ssh     │ -p auto-989875 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ ssh     │ -p auto-989875 sudo cri-dockerd --version                                                                                                                          │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ ssh     │ -p auto-989875 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │                     │
	│ ssh     │ -p auto-989875 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ ssh     │ -p auto-989875 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ ssh     │ -p auto-989875 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ ssh     │ -p auto-989875 sudo containerd config dump                                                                                                                         │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ ssh     │ -p auto-989875 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ ssh     │ -p auto-989875 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ ssh     │ -p auto-989875 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ ssh     │ -p auto-989875 sudo crio config                                                                                                                                    │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ delete  │ -p auto-989875                                                                                                                                                     │ auto-989875                  │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │ 21 Nov 25 14:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-859276 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                 │ default-k8s-diff-port-859276 │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │                     │
	│ start   │ -p custom-flannel-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:41 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:41:21
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:41:21.955320  295624 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:41:21.955589  295624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:41:21.955602  295624 out.go:374] Setting ErrFile to fd 2...
	I1121 14:41:21.955608  295624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:41:21.955791  295624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:41:21.956220  295624 out.go:368] Setting JSON to false
	I1121 14:41:21.957311  295624 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5031,"bootTime":1763731051,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:41:21.957398  295624 start.go:143] virtualization: kvm guest
	I1121 14:41:21.959054  295624 out.go:179] * [custom-flannel-989875] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:41:21.960720  295624 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:41:21.960753  295624 notify.go:221] Checking for updates...
	I1121 14:41:21.962769  295624 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:41:21.963775  295624 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:41:21.964728  295624 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:41:21.965844  295624 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:41:21.967100  295624 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:41:21.968760  295624 config.go:182] Loaded profile config "calico-989875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:41:21.968902  295624 config.go:182] Loaded profile config "default-k8s-diff-port-859276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:41:21.969026  295624 config.go:182] Loaded profile config "kindnet-989875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:41:21.969138  295624 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:41:21.993794  295624 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:41:21.993857  295624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:41:22.057045  295624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-21 14:41:22.047071454 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:41:22.057143  295624 docker.go:319] overlay module found
	I1121 14:41:22.059678  295624 out.go:179] * Using the docker driver based on user configuration
	I1121 14:41:22.060717  295624 start.go:309] selected driver: docker
	I1121 14:41:22.060736  295624 start.go:930] validating driver "docker" against <nil>
	I1121 14:41:22.060750  295624 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:41:22.061306  295624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:41:22.127896  295624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-21 14:41:22.117332451 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:41:22.128084  295624 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:41:22.128372  295624 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:41:22.130068  295624 out.go:179] * Using Docker driver with root privileges
	I1121 14:41:22.131205  295624 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1121 14:41:22.131238  295624 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1121 14:41:22.131318  295624 start.go:353] cluster config:
	{Name:custom-flannel-989875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-989875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:41:22.132644  295624 out.go:179] * Starting "custom-flannel-989875" primary control-plane node in "custom-flannel-989875" cluster
	I1121 14:41:22.133721  295624 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:41:22.134812  295624 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:41:22.135908  295624 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:41:22.135957  295624 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 14:41:22.135970  295624 cache.go:65] Caching tarball of preloaded images
	I1121 14:41:22.136008  295624 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:41:22.136061  295624 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 14:41:22.136076  295624 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:41:22.136174  295624 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/custom-flannel-989875/config.json ...
	I1121 14:41:22.136201  295624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/custom-flannel-989875/config.json: {Name:mkc986c15e31cbef5ae8121b6432a3e8c8e9f79f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:41:22.159805  295624 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:41:22.159830  295624 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:41:22.159851  295624 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:41:22.159880  295624 start.go:360] acquireMachinesLock for custom-flannel-989875: {Name:mk09c79eddd260bda08b37dede55a6d1716759a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:41:22.159987  295624 start.go:364] duration metric: took 87.07µs to acquireMachinesLock for "custom-flannel-989875"
	I1121 14:41:22.160016  295624 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-989875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-989875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:41:22.160106  295624 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:41:22.129794  287149 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502161516s
	I1121 14:41:22.141387  287149 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:41:22.152086  287149 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:41:22.160906  287149 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:41:22.161171  287149 kubeadm.go:319] [mark-control-plane] Marking the node calico-989875 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:41:22.173404  287149 kubeadm.go:319] [bootstrap-token] Using token: bljheb.zs0j0m4qrffxznpb
	I1121 14:41:22.175254  286424 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 14:41:22.175350  286424 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 14:41:22.175460  286424 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 14:41:22.175534  286424 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 14:41:22.175596  286424 kubeadm.go:319] OS: Linux
	I1121 14:41:22.175649  286424 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 14:41:22.175723  286424 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 14:41:22.175782  286424 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 14:41:22.175841  286424 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 14:41:22.175903  286424 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 14:41:22.175963  286424 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 14:41:22.176018  286424 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 14:41:22.176061  286424 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 14:41:22.176145  286424 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 14:41:22.176333  286424 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 14:41:22.176664  286424 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 14:41:22.176778  286424 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 14:41:22.177890  286424 out.go:252]   - Generating certificates and keys ...
	I1121 14:41:22.177984  286424 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:41:22.178084  286424 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:41:22.178281  286424 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:41:22.178350  286424 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:41:22.178434  286424 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:41:22.178519  286424 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:41:22.178610  286424 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:41:22.178739  286424 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-989875 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:41:22.178808  286424 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:41:22.178967  286424 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-989875 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1121 14:41:22.179059  286424 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:41:22.179150  286424 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:41:22.179212  286424 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:41:22.179288  286424 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:41:22.179368  286424 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:41:22.179469  286424 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:41:22.179545  286424 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:41:22.179648  286424 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:41:22.179725  286424 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:41:22.179825  286424 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:41:22.179909  286424 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 14:41:22.181343  286424 out.go:252]   - Booting up control plane ...
	I1121 14:41:22.181482  286424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:41:22.181604  286424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:41:22.181695  286424 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:41:22.182030  286424 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:41:22.182197  286424 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:41:22.182337  286424 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:41:22.182451  286424 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:41:22.182522  286424 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:41:22.182824  286424 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:41:22.182970  286424 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:41:22.183047  286424 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001252876s
	I1121 14:41:22.183168  286424 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:41:22.183290  286424 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1121 14:41:22.183406  286424 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:41:22.183513  286424 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 14:41:22.183644  286424 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.174076686s
	I1121 14:41:22.183742  286424 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.546617647s
	I1121 14:41:22.183848  286424 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001352514s
	I1121 14:41:22.184020  286424 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 14:41:22.184218  286424 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 14:41:22.184303  286424 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 14:41:22.184520  286424 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-989875 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 14:41:22.184625  286424 kubeadm.go:319] [bootstrap-token] Using token: o3mq7e.xziqc69u0qg30i7h
	I1121 14:41:22.185605  286424 out.go:252]   - Configuring RBAC rules ...
	I1121 14:41:22.185739  286424 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 14:41:22.185853  286424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 14:41:22.186055  286424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 14:41:22.186300  286424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 14:41:22.186522  286424 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 14:41:22.186670  286424 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 14:41:22.186840  286424 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 14:41:22.186907  286424 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 14:41:22.186961  286424 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:41:22.186972  286424 kubeadm.go:319] 
	I1121 14:41:22.187041  286424 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:41:22.187053  286424 kubeadm.go:319] 
	I1121 14:41:22.187148  286424 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:41:22.187162  286424 kubeadm.go:319] 
	I1121 14:41:22.187192  286424 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:41:22.187260  286424 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:41:22.187325  286424 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:41:22.187338  286424 kubeadm.go:319] 
	I1121 14:41:22.187412  286424 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:41:22.187427  286424 kubeadm.go:319] 
	I1121 14:41:22.187494  286424 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:41:22.187503  286424 kubeadm.go:319] 
	I1121 14:41:22.187580  286424 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:41:22.187676  286424 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:41:22.187768  286424 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:41:22.187781  286424 kubeadm.go:319] 
	I1121 14:41:22.187914  286424 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:41:22.188061  286424 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:41:22.188073  286424 kubeadm.go:319] 
	I1121 14:41:22.188181  286424 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token o3mq7e.xziqc69u0qg30i7h \
	I1121 14:41:22.188334  286424 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 \
	I1121 14:41:22.188361  286424 kubeadm.go:319] 	--control-plane 
	I1121 14:41:22.188366  286424 kubeadm.go:319] 
	I1121 14:41:22.188501  286424 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:41:22.188518  286424 kubeadm.go:319] 
	I1121 14:41:22.188626  286424 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token o3mq7e.xziqc69u0qg30i7h \
	I1121 14:41:22.188775  286424 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 
	I1121 14:41:22.188792  286424 cni.go:84] Creating CNI manager for "kindnet"
	I1121 14:41:22.190512  286424 out.go:179] * Configuring CNI (Container Networking Interface) ...
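	
	For reference, once kubeadm reports a successful init as above, the new control plane can be sanity-checked from the host. A minimal sketch, assuming kubectl is pointed at the admin kubeconfig this run wrote (the path is the one kubeadm prints above):
	
	  # Use the admin kubeconfig written by kubeadm
	  export KUBECONFIG=/etc/kubernetes/admin.conf
	  # The node should report Ready once the CNI (kindnet here) is applied
	  kubectl get nodes -o wide
	  # All control-plane pods should be Running
	  kubectl get pods -n kube-system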
	
	
	==> CRI-O <==
	Nov 21 14:41:11 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:11.261753002Z" level=info msg="Starting container: 0c137b028b659639189d55ff1b62b29e7b9276006225833f92eabcb3cd9ad494" id=603ca2dc-7d92-4066-ae00-ba83ad474396 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:41:11 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:11.263688745Z" level=info msg="Started container" PID=1857 containerID=0c137b028b659639189d55ff1b62b29e7b9276006225833f92eabcb3cd9ad494 description=kube-system/coredns-66bc5c9577-wq9lw/coredns id=603ca2dc-7d92-4066-ae00-ba83ad474396 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7d95cd22b5fd2815c81b97e0dd48711b5618456f63c66f37691fd7f8c7789fa9
	Nov 21 14:41:13 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:13.61575445Z" level=info msg="Running pod sandbox: default/busybox/POD" id=90c546b5-2922-462a-af86-d7238a07ba15 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:41:13 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:13.615831471Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:41:13 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:13.622321603Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7dcb8930134c740868d85a230db37042b6bafaf50e0c6b819fefe49f7ae3cd41 UID:efb20c28-6dae-485c-8d5b-dad4254c5f4a NetNS:/var/run/netns/e7926f71-7771-4151-830f-d5d8dcaa5272 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000600118}] Aliases:map[]}"
	Nov 21 14:41:13 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:13.622404543Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 21 14:41:13 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:13.632573791Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7dcb8930134c740868d85a230db37042b6bafaf50e0c6b819fefe49f7ae3cd41 UID:efb20c28-6dae-485c-8d5b-dad4254c5f4a NetNS:/var/run/netns/e7926f71-7771-4151-830f-d5d8dcaa5272 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000600118}] Aliases:map[]}"
	Nov 21 14:41:13 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:13.632731107Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 21 14:41:13 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:13.633631381Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 14:41:13 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:13.634666184Z" level=info msg="Ran pod sandbox 7dcb8930134c740868d85a230db37042b6bafaf50e0c6b819fefe49f7ae3cd41 with infra container: default/busybox/POD" id=90c546b5-2922-462a-af86-d7238a07ba15 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 14:41:13 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:13.635849941Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=df4f1005-1fde-4e0b-920e-b80435c5c97c name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:41:13 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:13.635978778Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=df4f1005-1fde-4e0b-920e-b80435c5c97c name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:41:13 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:13.636018682Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=df4f1005-1fde-4e0b-920e-b80435c5c97c name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:41:13 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:13.636775468Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b111fb81-b718-421c-bb31-a81394875d0a name=/runtime.v1.ImageService/PullImage
	Nov 21 14:41:13 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:13.638421409Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 14:41:14 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:14.358653129Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=b111fb81-b718-421c-bb31-a81394875d0a name=/runtime.v1.ImageService/PullImage
	Nov 21 14:41:14 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:14.35925212Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b02f1b60-5525-42af-8684-5bf06b603af0 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:41:14 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:14.360572787Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e250c3fc-ade6-4748-bd23-edb5bf7243d3 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 14:41:14 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:14.363812688Z" level=info msg="Creating container: default/busybox/busybox" id=ded5abdd-aa04-4d68-a7ed-bbd95db77872 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:41:14 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:14.363933261Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:41:14 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:14.36836525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:41:14 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:14.368908003Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:41:14 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:14.397622358Z" level=info msg="Created container b39fca43fc1b26cb25486ceca846af04142feb961a50c0a3ca46d43d8a07cd86: default/busybox/busybox" id=ded5abdd-aa04-4d68-a7ed-bbd95db77872 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:41:14 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:14.398403613Z" level=info msg="Starting container: b39fca43fc1b26cb25486ceca846af04142feb961a50c0a3ca46d43d8a07cd86" id=6fcf4f0b-4405-4b50-adb0-b71319bb7d34 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:41:14 default-k8s-diff-port-859276 crio[773]: time="2025-11-21T14:41:14.400234004Z" level=info msg="Started container" PID=1934 containerID=b39fca43fc1b26cb25486ceca846af04142feb961a50c0a3ca46d43d8a07cd86 description=default/busybox/busybox id=6fcf4f0b-4405-4b50-adb0-b71319bb7d34 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7dcb8930134c740868d85a230db37042b6bafaf50e0c6b819fefe49f7ae3cd41
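	
	The pull sequence above (an ImageStatus miss followed by PullImage) can be reproduced directly against CRI-O. A minimal sketch, assuming shell access to the node and crictl pointed at the stock CRI-O socket (adjust the endpoint if configured differently):
	
	  # List images CRI-O already has
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images
	  # Pull the image the busybox sandbox requested
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pull gcr.io/k8s-minikube/busybox:1.28.4-glibc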
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	b39fca43fc1b2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago        Running             busybox                   0                   7dcb8930134c7       busybox                                                default
	0c137b028b659       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago       Running             coredns                   0                   7d95cd22b5fd2       coredns-66bc5c9577-wq9lw                               kube-system
	4a8e3288277b8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago       Running             storage-provisioner       0                   84ffe85f6ad0e       storage-provisioner                                    kube-system
	cd9ff52105b07       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      52 seconds ago       Running             kindnet-cni               0                   8629262892cc9       kindnet-28knv                                          kube-system
	ed301f8b49329       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      52 seconds ago       Running             kube-proxy                0                   387d30db3e886       kube-proxy-vwzb2                                       kube-system
	be8d19c24c625       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   0                   b8cbe7de4c967       kube-controller-manager-default-k8s-diff-port-859276   kube-system
	8e364bfc69110       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      0                   1bf501953ac5b       etcd-default-k8s-diff-port-859276                      kube-system
	c810c267ac119       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            0                   a15f505055b33       kube-apiserver-default-k8s-diff-port-859276            kube-system
	072365425fa46       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            0                   eccec17ebc3b7       kube-scheduler-default-k8s-diff-port-859276            kube-system
	
	
	==> coredns [0c137b028b659639189d55ff1b62b29e7b9276006225833f92eabcb3cd9ad494] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41454 - 43774 "HINFO IN 7395698497519211178.5246893381646327663. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.504329259s
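	
	The HINFO query above is CoreDNS's own self-check; in-cluster resolution against this instance (cluster DNS 10.96.0.10) can be verified with a throwaway pod. A minimal sketch (pod name is illustrative):
	
	  # Resolve the kubernetes service through cluster DNS from inside the cluster
	  kubectl run dns-check --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local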
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-859276
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-859276
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=default-k8s-diff-port-859276
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_40_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:40:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-859276
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:41:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:41:10 +0000   Fri, 21 Nov 2025 14:40:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:41:10 +0000   Fri, 21 Nov 2025 14:40:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:41:10 +0000   Fri, 21 Nov 2025 14:40:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:41:10 +0000   Fri, 21 Nov 2025 14:41:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-859276
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                343bd2de-0163-43e5-a948-02f67c21f6df
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-wq9lw                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     54s
	  kube-system                 etcd-default-k8s-diff-port-859276                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         60s
	  kube-system                 kindnet-28knv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-default-k8s-diff-port-859276             250m (3%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-859276    200m (2%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-vwzb2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-default-k8s-diff-port-859276             100m (1%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 52s                kube-proxy       
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)  kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientPID
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s                kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s                kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s                kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                node-controller  Node default-k8s-diff-port-859276 event: Registered Node default-k8s-diff-port-859276 in Controller
	  Normal  NodeReady                13s                kubelet          Node default-k8s-diff-port-859276 status is now: NodeReady
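	
	The node report above is kubectl describe node output; individual fields can be pulled out when scripting health gates. A minimal sketch using the node name from this run (the jsonpath expression is illustrative):
	
	  # Full report, as captured above
	  kubectl describe node default-k8s-diff-port-859276
	  # Just the Ready condition status, for scripted checks
	  kubectl get node default-k8s-diff-port-859276 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'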
	
	
	==> dmesg <==
	[  +0.087005] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024870] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.407014] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 13:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	
	
	==> etcd [8e364bfc691100e7f6b7b56379712d397d1550aec95def2d05c6c5089c066b74] <==
	{"level":"warn","ts":"2025-11-21T14:40:20.762133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.770168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.778539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.788252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.795895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.805777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.813048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.822156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.834793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.843124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.851467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.862084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.872191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.881929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.891545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.903071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.918029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.928235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.936751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.945057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.968101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.976005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:20.983390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:40:21.033623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50562","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T14:41:03.252911Z","caller":"traceutil/trace.go:172","msg":"trace[147464307] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"113.947682ms","start":"2025-11-21T14:41:03.138944Z","end":"2025-11-21T14:41:03.252892Z","steps":["trace[147464307] 'process raft request'  (duration: 113.819866ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:41:23 up  1:23,  0 user,  load average: 5.77, 3.45, 2.11
	Linux default-k8s-diff-port-859276 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cd9ff52105b072395249d69c6d2d3fc9a76ec06d38e0d7b26252a5cba85c8c79] <==
	I1121 14:40:30.451119       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:40:30.455288       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 14:40:30.455491       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:40:30.455523       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:40:30.455547       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:40:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:40:30.753022       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:40:30.753053       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:40:30.753064       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:40:30.753201       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 14:41:00.753479       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 14:41:00.753498       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 14:41:00.753479       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 14:41:00.753678       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1121 14:41:01.953196       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:41:01.953235       1 metrics.go:72] Registering metrics
	I1121 14:41:01.953314       1 controller.go:711] "Syncing nftables rules"
	I1121 14:41:10.752885       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:41:10.752926       1 main.go:301] handling current node
	I1121 14:41:20.752493       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:41:20.752544       1 main.go:301] handling current node
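	
	The kindnet watch errors above cleared once the apiserver became reachable; when they persist, the daemonset logs are the first place to look. A minimal sketch, assuming the pods carry the usual app=kindnet label (the selector is an assumption, not confirmed by this run):
	
	  # Tail the kindnet pod logs in kube-system
	  kubectl -n kube-system logs -l app=kindnet --tail=20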
	
	
	==> kube-apiserver [c810c267ac119281dd430c7867f6a603419c6444040576d724d55602d73114fc] <==
	I1121 14:40:21.789293       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1121 14:40:21.787953       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:40:21.805275       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1121 14:40:21.806587       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:40:21.818729       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:40:21.819216       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:40:21.965862       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 14:40:22.590355       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1121 14:40:22.595898       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1121 14:40:22.595920       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:40:23.290023       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:40:23.329984       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:40:23.396777       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1121 14:40:23.403105       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1121 14:40:23.404053       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:40:23.407941       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:40:23.672432       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:40:24.285038       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:40:24.295383       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1121 14:40:24.304230       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 14:40:28.728538       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:40:29.626402       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1121 14:40:29.678708       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1121 14:40:29.686291       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1121 14:41:21.434292       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:43292: use of closed network connection
	
	
	==> kube-controller-manager [be8d19c24c625b23511f2fffc08cd343f851c7d63209f8a0295ee9efffefda0a] <==
	I1121 14:40:28.645163       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 14:40:28.649871       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 14:40:28.655377       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-859276" podCIDRs=["10.244.0.0/24"]
	I1121 14:40:28.671386       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 14:40:28.671518       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 14:40:28.671657       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 14:40:28.671706       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:40:28.671981       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 14:40:28.672793       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 14:40:28.672933       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 14:40:28.673131       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 14:40:28.673769       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:40:28.673804       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 14:40:28.675400       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:40:28.675775       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 14:40:28.675895       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 14:40:28.676133       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:40:28.675907       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 14:40:28.676423       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:40:28.676555       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:40:28.681766       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:40:28.687412       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 14:40:28.687973       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:40:28.710864       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:41:13.639002       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ed301f8b4932914234147d3a642326b03ac0dc2f961d084f686002e068a10d50] <==
	I1121 14:40:30.299346       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:40:30.407251       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:40:30.508223       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:40:30.508264       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1121 14:40:30.508372       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:40:30.542158       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:40:30.542394       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:40:30.549349       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:40:30.549885       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:40:30.550126       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:40:30.553059       1 config.go:200] "Starting service config controller"
	I1121 14:40:30.554673       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:40:30.553460       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:40:30.554709       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:40:30.553482       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:40:30.554723       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:40:30.554028       1 config.go:309] "Starting node config controller"
	I1121 14:40:30.554738       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:40:30.554745       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:40:30.655224       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:40:30.655319       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:40:30.655365       1 shared_informer.go:356] "Caches are synced" controller="service config"
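	
	kube-proxy above is running in iptables mode, so the Service rules it programs are visible on the node. A minimal sketch, assuming shell access via the profile from this run (profile name taken from these logs):
	
	  # Inspect the NAT chains kube-proxy maintains for Services
	  minikube -p default-k8s-diff-port-859276 ssh -- sudo iptables -t nat -L KUBE-SERVICES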
	
	
	==> kube-scheduler [072365425fa46c7791032efd4ea17dbc8102ca226cbe0a29fe73f5cb608a26cb] <==
	E1121 14:40:21.716781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:40:21.719827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:40:21.719846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:40:21.719957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 14:40:21.720040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:40:21.720129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 14:40:21.720344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 14:40:21.720421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 14:40:21.720489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 14:40:21.720555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 14:40:21.720673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:40:21.721040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 14:40:21.721468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:40:21.721633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 14:40:22.642724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 14:40:22.650916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 14:40:22.717867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 14:40:22.757140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 14:40:22.807234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 14:40:22.951641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 14:40:22.984526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 14:40:23.053030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 14:40:23.105151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 14:40:23.148348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1121 14:40:25.513336       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:40:25 default-k8s-diff-port-859276 kubelet[1334]: E1121 14:40:25.238647    1334 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-859276\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-859276"
	Nov 21 14:40:25 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:25.271792    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-859276" podStartSLOduration=1.27176945 podStartE2EDuration="1.27176945s" podCreationTimestamp="2025-11-21 14:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:40:25.257500424 +0000 UTC m=+1.164593917" watchObservedRunningTime="2025-11-21 14:40:25.27176945 +0000 UTC m=+1.178862940"
	Nov 21 14:40:25 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:25.271929    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-859276" podStartSLOduration=2.271920869 podStartE2EDuration="2.271920869s" podCreationTimestamp="2025-11-21 14:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:40:25.271873146 +0000 UTC m=+1.178966644" watchObservedRunningTime="2025-11-21 14:40:25.271920869 +0000 UTC m=+1.179014361"
	Nov 21 14:40:25 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:25.281758    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-859276" podStartSLOduration=1.281744827 podStartE2EDuration="1.281744827s" podCreationTimestamp="2025-11-21 14:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:40:25.281645078 +0000 UTC m=+1.188738573" watchObservedRunningTime="2025-11-21 14:40:25.281744827 +0000 UTC m=+1.188838320"
	Nov 21 14:40:25 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:25.290638    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-859276" podStartSLOduration=1.290618636 podStartE2EDuration="1.290618636s" podCreationTimestamp="2025-11-21 14:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:40:25.290552695 +0000 UTC m=+1.197646188" watchObservedRunningTime="2025-11-21 14:40:25.290618636 +0000 UTC m=+1.197712129"
	Nov 21 14:40:28 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:28.750100    1334 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 21 14:40:28 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:28.753239    1334 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 21 14:40:29 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:29.718430    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0b51b135-fa9e-48b7-a433-e0863a9fd18f-cni-cfg\") pod \"kindnet-28knv\" (UID: \"0b51b135-fa9e-48b7-a433-e0863a9fd18f\") " pod="kube-system/kindnet-28knv"
	Nov 21 14:40:29 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:29.718504    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/222915e5-19b3-40b3-95c7-8ae57fb7b570-lib-modules\") pod \"kube-proxy-vwzb2\" (UID: \"222915e5-19b3-40b3-95c7-8ae57fb7b570\") " pod="kube-system/kube-proxy-vwzb2"
	Nov 21 14:40:29 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:29.718537    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4pxd\" (UniqueName: \"kubernetes.io/projected/0b51b135-fa9e-48b7-a433-e0863a9fd18f-kube-api-access-m4pxd\") pod \"kindnet-28knv\" (UID: \"0b51b135-fa9e-48b7-a433-e0863a9fd18f\") " pod="kube-system/kindnet-28knv"
	Nov 21 14:40:29 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:29.718580    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/222915e5-19b3-40b3-95c7-8ae57fb7b570-kube-proxy\") pod \"kube-proxy-vwzb2\" (UID: \"222915e5-19b3-40b3-95c7-8ae57fb7b570\") " pod="kube-system/kube-proxy-vwzb2"
	Nov 21 14:40:29 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:29.718603    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/222915e5-19b3-40b3-95c7-8ae57fb7b570-xtables-lock\") pod \"kube-proxy-vwzb2\" (UID: \"222915e5-19b3-40b3-95c7-8ae57fb7b570\") " pod="kube-system/kube-proxy-vwzb2"
	Nov 21 14:40:29 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:29.718636    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b51b135-fa9e-48b7-a433-e0863a9fd18f-xtables-lock\") pod \"kindnet-28knv\" (UID: \"0b51b135-fa9e-48b7-a433-e0863a9fd18f\") " pod="kube-system/kindnet-28knv"
	Nov 21 14:40:29 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:29.718654    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b51b135-fa9e-48b7-a433-e0863a9fd18f-lib-modules\") pod \"kindnet-28knv\" (UID: \"0b51b135-fa9e-48b7-a433-e0863a9fd18f\") " pod="kube-system/kindnet-28knv"
	Nov 21 14:40:29 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:29.718674    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4w7t\" (UniqueName: \"kubernetes.io/projected/222915e5-19b3-40b3-95c7-8ae57fb7b570-kube-api-access-j4w7t\") pod \"kube-proxy-vwzb2\" (UID: \"222915e5-19b3-40b3-95c7-8ae57fb7b570\") " pod="kube-system/kube-proxy-vwzb2"
	Nov 21 14:40:30 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:30.271178    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-28knv" podStartSLOduration=1.271157275 podStartE2EDuration="1.271157275s" podCreationTimestamp="2025-11-21 14:40:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:40:30.270842982 +0000 UTC m=+6.177936475" watchObservedRunningTime="2025-11-21 14:40:30.271157275 +0000 UTC m=+6.178250772"
	Nov 21 14:40:30 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:40:30.293752    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vwzb2" podStartSLOduration=1.29372724 podStartE2EDuration="1.29372724s" podCreationTimestamp="2025-11-21 14:40:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:40:30.293327721 +0000 UTC m=+6.200421237" watchObservedRunningTime="2025-11-21 14:40:30.29372724 +0000 UTC m=+6.200820734"
	Nov 21 14:41:10 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:41:10.876011    1334 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 21 14:41:11 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:41:11.012634    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8c0f1b1b-a75b-41d9-ab3b-3295cef1b094-tmp\") pod \"storage-provisioner\" (UID: \"8c0f1b1b-a75b-41d9-ab3b-3295cef1b094\") " pod="kube-system/storage-provisioner"
	Nov 21 14:41:11 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:41:11.012690    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrxw5\" (UniqueName: \"kubernetes.io/projected/8c0f1b1b-a75b-41d9-ab3b-3295cef1b094-kube-api-access-mrxw5\") pod \"storage-provisioner\" (UID: \"8c0f1b1b-a75b-41d9-ab3b-3295cef1b094\") " pod="kube-system/storage-provisioner"
	Nov 21 14:41:11 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:41:11.012728    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d3dff94-2994-4ee3-a958-eb7fe0cd00d0-config-volume\") pod \"coredns-66bc5c9577-wq9lw\" (UID: \"1d3dff94-2994-4ee3-a958-eb7fe0cd00d0\") " pod="kube-system/coredns-66bc5c9577-wq9lw"
	Nov 21 14:41:11 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:41:11.012753    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxs57\" (UniqueName: \"kubernetes.io/projected/1d3dff94-2994-4ee3-a958-eb7fe0cd00d0-kube-api-access-zxs57\") pod \"coredns-66bc5c9577-wq9lw\" (UID: \"1d3dff94-2994-4ee3-a958-eb7fe0cd00d0\") " pod="kube-system/coredns-66bc5c9577-wq9lw"
	Nov 21 14:41:11 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:41:11.342823    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wq9lw" podStartSLOduration=42.342800904 podStartE2EDuration="42.342800904s" podCreationTimestamp="2025-11-21 14:40:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:41:11.342348448 +0000 UTC m=+47.249441941" watchObservedRunningTime="2025-11-21 14:41:11.342800904 +0000 UTC m=+47.249894398"
	Nov 21 14:41:13 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:41:13.309657    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.309627646 podStartE2EDuration="43.309627646s" podCreationTimestamp="2025-11-21 14:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:41:11.36853303 +0000 UTC m=+47.275626522" watchObservedRunningTime="2025-11-21 14:41:13.309627646 +0000 UTC m=+49.216721139"
	Nov 21 14:41:13 default-k8s-diff-port-859276 kubelet[1334]: I1121 14:41:13.429723    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk2pv\" (UniqueName: \"kubernetes.io/projected/efb20c28-6dae-485c-8d5b-dad4254c5f4a-kube-api-access-nk2pv\") pod \"busybox\" (UID: \"efb20c28-6dae-485c-8d5b-dad4254c5f4a\") " pod="default/busybox"
	
	
	==> storage-provisioner [4a8e3288277b868f91f602a494d467c638108e9905d8ecdf6b87a53299927ec4] <==
	I1121 14:41:11.270680       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:41:11.278866       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:41:11.278970       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:41:11.280958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:41:11.288202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:41:11.288499       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:41:11.288886       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"527dbf43-5986-4266-9001-2722967aec7b", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-859276_9e9f3875-7b76-447b-9abf-5f5f76f41984 became leader
	I1121 14:41:11.288953       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-859276_9e9f3875-7b76-447b-9abf-5f5f76f41984!
	W1121 14:41:11.293121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:41:11.297221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:41:11.389666       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-859276_9e9f3875-7b76-447b-9abf-5f5f76f41984!
	W1121 14:41:13.300684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:41:13.306756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:41:15.310853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:41:15.315308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:41:17.318639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:41:17.322764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:41:19.326581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:41:19.331442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:41:21.336084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:41:21.342910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:41:23.346775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:41:23.350659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice

-- /stdout --
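Note: the repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner section above come from its leader election, which still renews a v1 Endpoints lock rather than a Lease. The lock can be inspected by hand; a minimal sketch, using the object name from the LeaderElection event above and the annotation key client-go uses for Endpoints-based locks:

	kubectl --context default-k8s-diff-port-859276 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
	# holderIdentity should match the "became leader" event above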
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-859276 -n default-k8s-diff-port-859276
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-859276 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.39s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-859276 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-859276 --alsologtostderr -v=1: exit status 80 (2.185923623s)

-- stdout --
	* Pausing node default-k8s-diff-port-859276 ... 

-- /stdout --
** stderr ** 
	I1121 14:42:42.213448  323275 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:42:42.213640  323275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:42:42.213649  323275 out.go:374] Setting ErrFile to fd 2...
	I1121 14:42:42.213656  323275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:42:42.213911  323275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:42:42.214201  323275 out.go:368] Setting JSON to false
	I1121 14:42:42.214233  323275 mustload.go:66] Loading cluster: default-k8s-diff-port-859276
	I1121 14:42:42.214735  323275 config.go:182] Loaded profile config "default-k8s-diff-port-859276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:42:42.215203  323275 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-859276 --format={{.State.Status}}
	I1121 14:42:42.239081  323275 host.go:66] Checking if "default-k8s-diff-port-859276" exists ...
	I1121 14:42:42.239400  323275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:42:42.311293  323275 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-21 14:42:42.299511583 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:42:42.312132  323275 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-859276 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1121 14:42:42.322059  323275 out.go:179] * Pausing node default-k8s-diff-port-859276 ... 
	I1121 14:42:42.323153  323275 host.go:66] Checking if "default-k8s-diff-port-859276" exists ...
	I1121 14:42:42.323415  323275 ssh_runner.go:195] Run: systemctl --version
	I1121 14:42:42.323469  323275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-859276
	I1121 14:42:42.344612  323275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/default-k8s-diff-port-859276/id_rsa Username:docker}
	I1121 14:42:42.452965  323275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:42:42.468054  323275 pause.go:52] kubelet running: true
	I1121 14:42:42.468123  323275 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:42:42.660695  323275 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:42:42.660818  323275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:42:42.741377  323275 cri.go:89] found id: "d52774abd97b7c8a03b60d341da70eead219e69b9d2e38977c51930994e08e91"
	I1121 14:42:42.741399  323275 cri.go:89] found id: "6275d647f5a78c60d9721c98980ddc7c05db78ff9c3966dde958a62fc27e8523"
	I1121 14:42:42.741403  323275 cri.go:89] found id: "7bc1381c40424bf5c5ec170863470ce4dfb22c650dc079426153a476b7dc54fb"
	I1121 14:42:42.741407  323275 cri.go:89] found id: "f246e7bcac61f24e7f830a241a3325cbae840e2fce953f88ae1973715ca45fe4"
	I1121 14:42:42.741410  323275 cri.go:89] found id: "8b13c1531e7f2a6ba3e16cf97a2bc956ef7e797fdaf90f7a395984f6dcd0c7ab"
	I1121 14:42:42.741413  323275 cri.go:89] found id: "c119f5ad65c9c3667299aef384e5841ae27efc53c987bcbfb515ef1c25aa3b04"
	I1121 14:42:42.741416  323275 cri.go:89] found id: "978c3213baf15d70be19ba61ca307b752fef7c010cd37640b6d37cbac117cab0"
	I1121 14:42:42.741419  323275 cri.go:89] found id: "bec7da40a6663ecc1f4b7e19ba64af8747dad2862c844d662dec96aedce65617"
	I1121 14:42:42.741421  323275 cri.go:89] found id: "46730d3f8950dfc39160a83403cde77d49652dbc3d617ab6dc02db67defa9031"
	I1121 14:42:42.741429  323275 cri.go:89] found id: "792ec4dce17b4ab7790240101a6b580c76469012712ad8190898f09de8430e58"
	I1121 14:42:42.741432  323275 cri.go:89] found id: "b4239d7c08405352938813c62cb6f15113973121a11d38199bd1bf0c93a47049"
	I1121 14:42:42.741435  323275 cri.go:89] found id: ""
	I1121 14:42:42.741473  323275 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:42:42.754542  323275 retry.go:31] will retry after 206.043574ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:42:42Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:42:42.961023  323275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:42:42.975076  323275 pause.go:52] kubelet running: false
	I1121 14:42:42.975158  323275 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:42:43.159159  323275 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:42:43.159248  323275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:42:43.232379  323275 cri.go:89] found id: "d52774abd97b7c8a03b60d341da70eead219e69b9d2e38977c51930994e08e91"
	I1121 14:42:43.232407  323275 cri.go:89] found id: "6275d647f5a78c60d9721c98980ddc7c05db78ff9c3966dde958a62fc27e8523"
	I1121 14:42:43.232428  323275 cri.go:89] found id: "7bc1381c40424bf5c5ec170863470ce4dfb22c650dc079426153a476b7dc54fb"
	I1121 14:42:43.232433  323275 cri.go:89] found id: "f246e7bcac61f24e7f830a241a3325cbae840e2fce953f88ae1973715ca45fe4"
	I1121 14:42:43.232438  323275 cri.go:89] found id: "8b13c1531e7f2a6ba3e16cf97a2bc956ef7e797fdaf90f7a395984f6dcd0c7ab"
	I1121 14:42:43.232443  323275 cri.go:89] found id: "c119f5ad65c9c3667299aef384e5841ae27efc53c987bcbfb515ef1c25aa3b04"
	I1121 14:42:43.232447  323275 cri.go:89] found id: "978c3213baf15d70be19ba61ca307b752fef7c010cd37640b6d37cbac117cab0"
	I1121 14:42:43.232450  323275 cri.go:89] found id: "bec7da40a6663ecc1f4b7e19ba64af8747dad2862c844d662dec96aedce65617"
	I1121 14:42:43.232454  323275 cri.go:89] found id: "46730d3f8950dfc39160a83403cde77d49652dbc3d617ab6dc02db67defa9031"
	I1121 14:42:43.232463  323275 cri.go:89] found id: "792ec4dce17b4ab7790240101a6b580c76469012712ad8190898f09de8430e58"
	I1121 14:42:43.232471  323275 cri.go:89] found id: "b4239d7c08405352938813c62cb6f15113973121a11d38199bd1bf0c93a47049"
	I1121 14:42:43.232475  323275 cri.go:89] found id: ""
	I1121 14:42:43.232532  323275 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:42:43.245717  323275 retry.go:31] will retry after 191.567176ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:42:43Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:42:43.438156  323275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:42:43.452606  323275 pause.go:52] kubelet running: false
	I1121 14:42:43.452674  323275 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:42:43.620831  323275 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:42:43.620915  323275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:42:43.698886  323275 cri.go:89] found id: "d52774abd97b7c8a03b60d341da70eead219e69b9d2e38977c51930994e08e91"
	I1121 14:42:43.698908  323275 cri.go:89] found id: "6275d647f5a78c60d9721c98980ddc7c05db78ff9c3966dde958a62fc27e8523"
	I1121 14:42:43.698913  323275 cri.go:89] found id: "7bc1381c40424bf5c5ec170863470ce4dfb22c650dc079426153a476b7dc54fb"
	I1121 14:42:43.698918  323275 cri.go:89] found id: "f246e7bcac61f24e7f830a241a3325cbae840e2fce953f88ae1973715ca45fe4"
	I1121 14:42:43.698921  323275 cri.go:89] found id: "8b13c1531e7f2a6ba3e16cf97a2bc956ef7e797fdaf90f7a395984f6dcd0c7ab"
	I1121 14:42:43.698926  323275 cri.go:89] found id: "c119f5ad65c9c3667299aef384e5841ae27efc53c987bcbfb515ef1c25aa3b04"
	I1121 14:42:43.698930  323275 cri.go:89] found id: "978c3213baf15d70be19ba61ca307b752fef7c010cd37640b6d37cbac117cab0"
	I1121 14:42:43.698934  323275 cri.go:89] found id: "bec7da40a6663ecc1f4b7e19ba64af8747dad2862c844d662dec96aedce65617"
	I1121 14:42:43.698938  323275 cri.go:89] found id: "46730d3f8950dfc39160a83403cde77d49652dbc3d617ab6dc02db67defa9031"
	I1121 14:42:43.698946  323275 cri.go:89] found id: "792ec4dce17b4ab7790240101a6b580c76469012712ad8190898f09de8430e58"
	I1121 14:42:43.698950  323275 cri.go:89] found id: "b4239d7c08405352938813c62cb6f15113973121a11d38199bd1bf0c93a47049"
	I1121 14:42:43.698954  323275 cri.go:89] found id: ""
	I1121 14:42:43.699002  323275 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:42:43.712137  323275 retry.go:31] will retry after 330.153135ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:42:43Z" level=error msg="open /run/runc: no such file or directory"
	I1121 14:42:44.042652  323275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:42:44.056472  323275 pause.go:52] kubelet running: false
	I1121 14:42:44.056535  323275 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1121 14:42:44.208140  323275 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1121 14:42:44.208213  323275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1121 14:42:44.283958  323275 cri.go:89] found id: "d52774abd97b7c8a03b60d341da70eead219e69b9d2e38977c51930994e08e91"
	I1121 14:42:44.283988  323275 cri.go:89] found id: "6275d647f5a78c60d9721c98980ddc7c05db78ff9c3966dde958a62fc27e8523"
	I1121 14:42:44.283995  323275 cri.go:89] found id: "7bc1381c40424bf5c5ec170863470ce4dfb22c650dc079426153a476b7dc54fb"
	I1121 14:42:44.284001  323275 cri.go:89] found id: "f246e7bcac61f24e7f830a241a3325cbae840e2fce953f88ae1973715ca45fe4"
	I1121 14:42:44.284005  323275 cri.go:89] found id: "8b13c1531e7f2a6ba3e16cf97a2bc956ef7e797fdaf90f7a395984f6dcd0c7ab"
	I1121 14:42:44.284010  323275 cri.go:89] found id: "c119f5ad65c9c3667299aef384e5841ae27efc53c987bcbfb515ef1c25aa3b04"
	I1121 14:42:44.284014  323275 cri.go:89] found id: "978c3213baf15d70be19ba61ca307b752fef7c010cd37640b6d37cbac117cab0"
	I1121 14:42:44.284018  323275 cri.go:89] found id: "bec7da40a6663ecc1f4b7e19ba64af8747dad2862c844d662dec96aedce65617"
	I1121 14:42:44.284037  323275 cri.go:89] found id: "46730d3f8950dfc39160a83403cde77d49652dbc3d617ab6dc02db67defa9031"
	I1121 14:42:44.284045  323275 cri.go:89] found id: "792ec4dce17b4ab7790240101a6b580c76469012712ad8190898f09de8430e58"
	I1121 14:42:44.284049  323275 cri.go:89] found id: "b4239d7c08405352938813c62cb6f15113973121a11d38199bd1bf0c93a47049"
	I1121 14:42:44.284053  323275 cri.go:89] found id: ""
	I1121 14:42:44.284096  323275 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 14:42:44.302528  323275 out.go:203] 
	W1121 14:42:44.303745  323275 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:42:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:42:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 14:42:44.303765  323275 out.go:285] * 
	* 
	W1121 14:42:44.309159  323275 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 14:42:44.310274  323275 out.go:203] 

** /stderr **
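
Note: the failure above is mechanical. The pause flow disables the kubelet, then tries to enumerate running containers with "sudo runc list -f json"; that call fails three times in a row with "open /run/runc: no such file or directory" (runc's state directory is absent on this crio node), so after the retries minikube aborts with GUEST_PAUSE / exit status 80. The probe can be replayed by hand; a sketch reusing the exact commands from the ssh_runner lines above, run through minikube ssh:

	# the listing that pause retries; reproduces "open /run/runc: no such file or directory"
	minikube -p default-k8s-diff-port-859276 ssh "sudo runc list -f json"

	# the CRI-side listing that does succeed (returns the cri.go "found id:" values above)
	minikube -p default-k8s-diff-port-859276 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
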
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-859276 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-859276
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-859276:

-- stdout --
	[
	    {
	        "Id": "2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac",
	        "Created": "2025-11-21T14:40:05.048409185Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300725,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:41:42.673243364Z",
	            "FinishedAt": "2025-11-21T14:41:41.622503376Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac/hosts",
	        "LogPath": "/var/lib/docker/containers/2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac/2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac-json.log",
	        "Name": "/default-k8s-diff-port-859276",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-859276:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-859276",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac",
	                "LowerDir": "/var/lib/docker/overlay2/37ea76c5bad6d11bb4dd4e337b8935aab81c4bc411039c222a9d057b3c4b3202-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37ea76c5bad6d11bb4dd4e337b8935aab81c4bc411039c222a9d057b3c4b3202/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37ea76c5bad6d11bb4dd4e337b8935aab81c4bc411039c222a9d057b3c4b3202/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37ea76c5bad6d11bb4dd4e337b8935aab81c4bc411039c222a9d057b3c4b3202/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-859276",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-859276/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-859276",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-859276",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-859276",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5d74d31ab700c5fab742a714db55c5616360619eba3f6a1ba886fb12f0b38cc4",
	            "SandboxKey": "/var/run/docker/netns/5d74d31ab700",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-859276": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "160a2921f00f660205c7789d2cbe27b525c000c5d85520fd19733f7917bfd7fd",
	                    "EndpointID": "c33e9ca87d54f1c1251daed15c8e67c9a56b12158d88d405240c25b97e6e25b0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "a2:8d:9e:f6:45:6a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-859276",
	                        "2d534a2a3b1f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
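Note: the NetworkSettings.Ports block above shows SSH published on 127.0.0.1:33119, the same value the pause command resolved earlier via its inspect template (see the cli_runner and sshutil.go lines in the stderr log). The equivalent stand-alone one-liner:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-859276
	# -> 33119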
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-859276 -n default-k8s-diff-port-859276
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-859276 -n default-k8s-diff-port-859276: exit status 2 (326.204667ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-859276 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-859276 logs -n 25: (3.117589188s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-989875 sudo journalctl -xeu kubelet --all --full --no-pager                                                          │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo cat /etc/kubernetes/kubelet.conf                                                                         │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo cat /var/lib/kubelet/config.yaml                                                                         │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl status docker --all --full --no-pager                                                          │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl cat docker --no-pager                                                                          │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo cat /etc/docker/daemon.json                                                                              │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p custom-flannel-989875 sudo docker system info                                                                                       │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl status cri-docker --all --full --no-pager                                                      │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl cat cri-docker --no-pager                                                                      │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                 │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p custom-flannel-989875 sudo cat /usr/lib/systemd/system/cri-docker.service                                                           │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo cri-dockerd --version                                                                                    │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl status containerd --all --full --no-pager                                                      │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl cat containerd --no-pager                                                                      │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo cat /lib/systemd/system/containerd.service                                                               │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo cat /etc/containerd/config.toml                                                                          │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo containerd config dump                                                                                   │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl status crio --all --full --no-pager                                                            │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl cat crio --no-pager                                                                            │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                  │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo crio config                                                                                              │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ delete  │ -p custom-flannel-989875                                                                                                               │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ image   │ default-k8s-diff-port-859276 image list --format=json                                                                                  │ default-k8s-diff-port-859276 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ start   │ -p bridge-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio │ bridge-989875                │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-859276 --alsologtostderr -v=1                                                                                 │ default-k8s-diff-port-859276 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:42:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:42:42.017035  323150 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:42:42.017172  323150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:42:42.017184  323150 out.go:374] Setting ErrFile to fd 2...
	I1121 14:42:42.017190  323150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:42:42.017379  323150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:42:42.017885  323150 out.go:368] Setting JSON to false
	I1121 14:42:42.019217  323150 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5111,"bootTime":1763731051,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:42:42.019329  323150 start.go:143] virtualization: kvm guest
	I1121 14:42:42.022501  323150 out.go:179] * [bridge-989875] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:42:42.024235  323150 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:42:42.024266  323150 notify.go:221] Checking for updates...
	I1121 14:42:42.026645  323150 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:42:42.027821  323150 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:42:42.032777  323150 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:42:42.033850  323150 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:42:42.035370  323150 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:42:42.099604  310504 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:42:42.100040  310504 kubeadm.go:319] 
	I1121 14:42:42.100134  310504 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:42:42.100140  310504 kubeadm.go:319] 
	I1121 14:42:42.100241  310504 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:42:42.100247  310504 kubeadm.go:319] 
	I1121 14:42:42.100283  310504 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:42:42.100365  310504 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:42:42.100430  310504 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:42:42.100436  310504 kubeadm.go:319] 
	I1121 14:42:42.100504  310504 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:42:42.100510  310504 kubeadm.go:319] 
	I1121 14:42:42.100607  310504 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:42:42.100615  310504 kubeadm.go:319] 
	I1121 14:42:42.100698  310504 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:42:42.100805  310504 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:42:42.100903  310504 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:42:42.100910  310504 kubeadm.go:319] 
	I1121 14:42:42.101009  310504 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:42:42.101104  310504 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:42:42.101110  310504 kubeadm.go:319] 
	I1121 14:42:42.101209  310504 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token k1x4rx.bzruoumyrdb3i6q0 \
	I1121 14:42:42.101332  310504 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 \
	I1121 14:42:42.101357  310504 kubeadm.go:319] 	--control-plane 
	I1121 14:42:42.101363  310504 kubeadm.go:319] 
	I1121 14:42:42.101464  310504 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:42:42.101470  310504 kubeadm.go:319] 
	I1121 14:42:42.101581  310504 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token k1x4rx.bzruoumyrdb3i6q0 \
	I1121 14:42:42.101722  310504 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 
	I1121 14:42:42.105964  310504 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:42:42.106116  310504 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:42:42.106139  310504 cni.go:84] Creating CNI manager for "bridge"
	I1121 14:42:42.107874  310504 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
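
The "Configuring bridge CNI" step above has minikube write a CNI conflist under /etc/cni/net.d inside the node. A quick way to inspect what actually got written on this profile (a sketch; the exact file name minikube chooses is not shown in this log):

	minikube ssh -p bridge-989875 -- sudo ls /etc/cni/net.d/
	minikube ssh -p bridge-989875 -- sudo cat /etc/cni/net.d/*.conflist
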
	I1121 14:42:42.037502  323150 config.go:182] Loaded profile config "default-k8s-diff-port-859276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:42:42.037643  323150 config.go:182] Loaded profile config "enable-default-cni-989875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:42:42.037788  323150 config.go:182] Loaded profile config "flannel-989875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:42:42.037919  323150 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:42:42.063469  323150 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:42:42.063609  323150 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:42:42.136260  323150 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-21 14:42:42.122256423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:42:42.136411  323150 docker.go:319] overlay module found
	I1121 14:42:42.138109  323150 out.go:179] * Using the docker driver based on user configuration
	I1121 14:42:42.139445  323150 start.go:309] selected driver: docker
	I1121 14:42:42.139456  323150 start.go:930] validating driver "docker" against <nil>
	I1121 14:42:42.139469  323150 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:42:42.140677  323150 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:42:42.222292  323150 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-21 14:42:42.210241773 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:42:42.222497  323150 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:42:42.222929  323150 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:42:42.224407  323150 out.go:179] * Using Docker driver with root privileges
	I1121 14:42:42.227409  323150 cni.go:84] Creating CNI manager for "bridge"
	I1121 14:42:42.227438  323150 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1121 14:42:42.227515  323150 start.go:353] cluster config:
	{Name:bridge-989875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-989875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:42:42.228938  323150 out.go:179] * Starting "bridge-989875" primary control-plane node in "bridge-989875" cluster
	I1121 14:42:42.230343  323150 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:42:42.231822  323150 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:42:42.233326  323150 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:42:42.233365  323150 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 14:42:42.233375  323150 cache.go:65] Caching tarball of preloaded images
	I1121 14:42:42.233469  323150 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 14:42:42.233480  323150 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:42:42.233626  323150 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/bridge-989875/config.json ...
	I1121 14:42:42.233652  323150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/bridge-989875/config.json: {Name:mk930f7e855eb7acbedfd0b5d29db36ede4ab530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:42:42.233795  323150 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:42:42.264057  323150 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:42:42.264082  323150 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:42:42.264101  323150 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:42:42.264134  323150 start.go:360] acquireMachinesLock for bridge-989875: {Name:mk2d0c076b6dcc60f8c2dc133df0fce473032530 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:42:42.264243  323150 start.go:364] duration metric: took 89.139µs to acquireMachinesLock for "bridge-989875"
	I1121 14:42:42.264312  323150 start.go:93] Provisioning new machine with config: &{Name:bridge-989875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-989875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:42:42.264442  323150 start.go:125] createHost starting for "" (driver="docker")
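
The cluster config logged above can be approximated as a direct start invocation. This is a reconstruction from the logged fields (driver, container runtime, CNI, memory, CPUs, Kubernetes version), not the literal command line the test harness ran:

	out/minikube-linux-amd64 start -p bridge-989875 \
	  --driver=docker --container-runtime=crio --cni=bridge \
	  --memory=3072 --cpus=2 --kubernetes-version=v1.34.1
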
	I1121 14:42:40.221632  316581 out.go:252]   - Generating certificates and keys ...
	I1121 14:42:40.221741  316581 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:42:40.221838  316581 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:42:40.825104  316581 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:42:41.022919  316581 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:42:41.600308  316581 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:42:42.051949  316581 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:42:42.393607  316581 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:42:42.393768  316581 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [flannel-989875 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:42:42.878175  316581 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:42:42.878364  316581 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [flannel-989875 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:42:43.153122  316581 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
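
The [certs] lines above record the SANs kubeadm bakes into each certificate (etcd/server, for example, is valid for flannel-989875, localhost, 192.168.94.2, 127.0.0.1 and ::1). One way to double-check on the node, assuming minikube's usual certificate directory /var/lib/minikube/certs:

	minikube ssh -p flannel-989875 -- sudo openssl x509 \
	  -in /var/lib/minikube/certs/etcd/server.crt -noout -text | \
	  grep -A1 'Subject Alternative Name'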
	
	
	==> CRI-O <==
	Nov 21 14:42:26 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:26.611076223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:42:26 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:26.611737396Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:42:26 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:26.636537164Z" level=info msg="Created container 792ec4dce17b4ab7790240101a6b580c76469012712ad8190898f09de8430e58: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq/dashboard-metrics-scraper" id=8b931834-a088-4491-b5fa-7704ec101107 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:42:26 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:26.637217562Z" level=info msg="Starting container: 792ec4dce17b4ab7790240101a6b580c76469012712ad8190898f09de8430e58" id=7ac607a2-0450-4366-850f-f7754ba17ae9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:42:26 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:26.639447479Z" level=info msg="Started container" PID=1706 containerID=792ec4dce17b4ab7790240101a6b580c76469012712ad8190898f09de8430e58 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq/dashboard-metrics-scraper id=7ac607a2-0450-4366-850f-f7754ba17ae9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f93865affc737d128af20ba31e6b1c8936eb856231eb7aadb41f5d2bf14fc837
	Nov 21 14:42:26 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:26.764379761Z" level=info msg="Removing container: c3f566419ca7cc288119280617c9dbea9b26a92ebc90b789a652099c9f91f727" id=c10811e1-4029-4a09-811b-695babd0ffdc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 14:42:26 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:26.775872157Z" level=info msg="Removed container c3f566419ca7cc288119280617c9dbea9b26a92ebc90b789a652099c9f91f727: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq/dashboard-metrics-scraper" id=c10811e1-4029-4a09-811b-695babd0ffdc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.319264228Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.323884244Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.323914015Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.323937481Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.327635698Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.327663299Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.327683044Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.33111884Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.33113928Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.33115438Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.334413593Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.334433516Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.406300835Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.412212595Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.412385669Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.412429228Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.416830417Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.416856827Z" level=info msg="Updated default CNI network name to kindnet"
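
The CNI monitoring events above are CRI-O reacting to kindnet rewriting its config (CREATE/WRITE on 10-kindnet.conflist.temp, then a RENAME into place). To see the conflist CRI-O settled on:

	minikube ssh -p default-k8s-diff-port-859276 -- sudo cat /etc/cni/net.d/10-kindnet.conflist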
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	792ec4dce17b4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   f93865affc737       dashboard-metrics-scraper-6ffb444bf9-2xhmq             kubernetes-dashboard
	d52774abd97b7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   35f6ec0c62555       storage-provisioner                                    kube-system
	b4239d7c08405       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   941e23f361748       kubernetes-dashboard-855c9754f9-j7rcv                  kubernetes-dashboard
	85f36b2dcfe6f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   8a247c11be3d4       busybox                                                default
	6275d647f5a78       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   666af12636f0e       coredns-66bc5c9577-wq9lw                               kube-system
	7bc1381c40424       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   35f6ec0c62555       storage-provisioner                                    kube-system
	f246e7bcac61f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   a2dde58cc92db       kube-proxy-vwzb2                                       kube-system
	8b13c1531e7f2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   35f18666e4168       kindnet-28knv                                          kube-system
	c119f5ad65c9c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   8218fc154f4ba       etcd-default-k8s-diff-port-859276                      kube-system
	978c3213baf15       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   d210dcb5cf9a4       kube-scheduler-default-k8s-diff-port-859276            kube-system
	bec7da40a6663       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   08460b623962b       kube-controller-manager-default-k8s-diff-port-859276   kube-system
	46730d3f8950d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   6491bcb036cb7       kube-apiserver-default-k8s-diff-port-859276            kube-system
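
The table above is CRI-O's container listing for the node; it can be reproduced against this profile with crictl (-a includes exited containers, such as the dashboard-metrics-scraper attempt shown above):

	minikube ssh -p default-k8s-diff-port-859276 -- sudo crictl ps -a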
	
	
	==> coredns [6275d647f5a78c60d9721c98980ddc7c05db78ff9c3966dde958a62fc27e8523] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54570 - 64318 "HINFO IN 661135359172939465.2483412663312711059. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.484856595s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
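
The three i/o timeouts above mean CoreDNS could not reach the apiserver's ClusterIP (10.96.0.1:443) over the pod network for a stretch, consistent with kube-proxy and kindnet restarting around the same time. A rough in-cluster connectivity probe (a sketch; busybox's nc exits non-zero if the TCP connect does not complete within the timeout):

	kubectl --context default-k8s-diff-port-859276 run nettest --rm -it \
	  --restart=Never --image=busybox:1.36 -- nc -w 2 10.96.0.1 443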
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-859276
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-859276
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=default-k8s-diff-port-859276
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_40_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:40:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-859276
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:42:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:42:23 +0000   Fri, 21 Nov 2025 14:40:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:42:23 +0000   Fri, 21 Nov 2025 14:40:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:42:23 +0000   Fri, 21 Nov 2025 14:40:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:42:23 +0000   Fri, 21 Nov 2025 14:41:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-859276
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                343bd2de-0163-43e5-a948-02f67c21f6df
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-wq9lw                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m16s
	  kube-system                 etcd-default-k8s-diff-port-859276                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m22s
	  kube-system                 kindnet-28knv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-default-k8s-diff-port-859276             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-859276    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-vwzb2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-default-k8s-diff-port-859276             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2xhmq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-j7rcv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m15s                  kube-proxy       
	  Normal  Starting                 52s                    kube-proxy       
	  Normal  Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m27s (x8 over 2m27s)  kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s (x8 over 2m27s)  kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s (x8 over 2m27s)  kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m21s                  kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m21s                  kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m21s                  kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m21s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m17s                  node-controller  Node default-k8s-diff-port-859276 event: Registered Node default-k8s-diff-port-859276 in Controller
	  Normal  NodeReady                95s                    kubelet          Node default-k8s-diff-port-859276 status is now: NodeReady
	  Normal  Starting                 56s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)      kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)      kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)      kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                    node-controller  Node default-k8s-diff-port-859276 event: Registered Node default-k8s-diff-port-859276 in Controller
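
The node dump above is ordinary "kubectl describe node" output captured by the report; to regenerate it (minikube names the kubeconfig context after the profile):

	kubectl --context default-k8s-diff-port-859276 describe node default-k8s-diff-port-859276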
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.052324] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 14:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 89 9f d1 1f 4a 08 06
	[  +0.000384] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a d9 e5 66 e3 a7 08 06
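
The "martian source" lines are the kernel logging packets that arrived with a source address it considers impossible for that interface (here 127.0.0.1-sourced traffic on eth0, a known side effect of routing localhost NodePort traffic with route_localnet=1). They only appear when martian logging is enabled on the host:

	# check whether martian-packet logging is on
	sysctl net.ipv4.conf.all.log_martians
	# silence the messages host-wide (use with care)
	sudo sysctl -w net.ipv4.conf.all.log_martians=0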
	
	
	==> etcd [c119f5ad65c9c3667299aef384e5841ae27efc53c987bcbfb515ef1c25aa3b04] <==
	{"level":"warn","ts":"2025-11-21T14:41:51.348469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.355899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.363821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.371840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.381500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.389815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.400926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.408490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.416397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.424906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.432930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.441267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.448742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.456591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.464500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.476209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.480898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.497303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.506230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.515097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.584003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59370","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T14:42:33.157048Z","caller":"traceutil/trace.go:172","msg":"trace[980987117] transaction","detail":"{read_only:false; response_revision:670; number_of_response:1; }","duration":"100.055784ms","start":"2025-11-21T14:42:33.056974Z","end":"2025-11-21T14:42:33.157030Z","steps":["trace[980987117] 'process raft request'  (duration: 99.932648ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-21T14:42:33.538301Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.095518ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-11-21T14:42:33.538859Z","caller":"traceutil/trace.go:172","msg":"trace[1178991858] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:672; }","duration":"127.570213ms","start":"2025-11-21T14:42:33.411174Z","end":"2025-11-21T14:42:33.538744Z","steps":["trace[1178991858] 'range keys from in-memory index tree'  (duration: 127.01525ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T14:42:33.805094Z","caller":"traceutil/trace.go:172","msg":"trace[2052278630] transaction","detail":"{read_only:false; response_revision:674; number_of_response:1; }","duration":"207.149457ms","start":"2025-11-21T14:42:33.597923Z","end":"2025-11-21T14:42:33.805072Z","steps":["trace[2052278630] 'process raft request'  (duration: 206.477227ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:42:46 up  1:25,  0 user,  load average: 5.04, 3.73, 2.32
	Linux default-k8s-diff-port-859276 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8b13c1531e7f2a6ba3e16cf97a2bc956ef7e797fdaf90f7a395984f6dcd0c7ab] <==
	I1121 14:41:53.116876       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:41:53.117073       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 14:41:53.117230       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:41:53.117246       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:41:53.117268       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:41:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:41:53.319141       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:41:53.319161       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:41:53.319169       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:41:53.409237       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 14:42:23.319386       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 14:42:23.319392       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 14:42:23.319474       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 14:42:23.319493       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1121 14:42:24.519440       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:42:24.519470       1 metrics.go:72] Registering metrics
	I1121 14:42:24.519525       1 controller.go:711] "Syncing nftables rules"
	I1121 14:42:33.318946       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:42:33.319025       1 main.go:301] handling current node
	I1121 14:42:43.319648       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:42:43.319682       1 main.go:301] handling current node
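
The "nri plugin exited" line above is kindnet failing to find an NRI socket at /var/run/nri/nri.sock, which is absent in this setup; kindnet continues without it. To confirm the socket simply does not exist on the node:

	minikube ssh -p default-k8s-diff-port-859276 -- ls -l /var/run/nri/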
	
	
	==> kube-apiserver [46730d3f8950dfc39160a83403cde77d49652dbc3d617ab6dc02db67defa9031] <==
	I1121 14:41:52.292387       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1121 14:41:52.292404       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 14:41:52.292498       1 aggregator.go:171] initial CRD sync complete...
	I1121 14:41:52.292514       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 14:41:52.292522       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:41:52.292530       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:41:52.295232       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1121 14:41:52.295492       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 14:41:52.295725       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1121 14:41:52.295800       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1121 14:41:52.295840       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1121 14:41:52.299081       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:41:52.322514       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 14:41:52.644554       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:41:52.698740       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:41:52.724468       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:41:52.734247       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:41:52.741798       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:41:52.774541       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.216.101"}
	I1121 14:41:52.788957       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.122.156"}
	I1121 14:41:53.195681       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:41:55.866883       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:41:55.916937       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:41:56.117813       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [bec7da40a6663ecc1f4b7e19ba64af8747dad2862c844d662dec96aedce65617] <==
	I1121 14:41:55.666095       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 14:41:55.713350       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:41:55.713374       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 14:41:55.714578       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 14:41:55.714597       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 14:41:55.714607       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:41:55.714623       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:41:55.714583       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 14:41:55.714795       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:41:55.714817       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 14:41:55.715832       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:41:55.715854       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 14:41:55.715906       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 14:41:55.718053       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:41:55.719228       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:41:55.719256       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:41:55.721512       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:41:55.721574       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:41:55.721606       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:41:55.723910       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:41:55.729201       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:41:55.729217       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:41:55.729224       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:41:55.731415       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 14:41:55.732816       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f246e7bcac61f24e7f830a241a3325cbae840e2fce953f88ae1973715ca45fe4] <==
	I1121 14:41:53.024301       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:41:53.095903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:41:53.196442       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:41:53.196499       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1121 14:41:53.196644       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:41:53.218163       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:41:53.218216       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:41:53.224325       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:41:53.224836       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:41:53.224906       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:41:53.226476       1 config.go:200] "Starting service config controller"
	I1121 14:41:53.226498       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:41:53.226596       1 config.go:309] "Starting node config controller"
	I1121 14:41:53.226617       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:41:53.226624       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:41:53.226638       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:41:53.226643       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:41:53.226723       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:41:53.226778       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:41:53.327485       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:41:53.327477       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:41:53.327537       1 shared_informer.go:356] "Caches are synced" controller="service config"
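
The route_localnet line above explains the martian-source dmesg entries earlier in this report: kube-proxy sets that sysctl so NodePort connections to 127.0.0.1 can be forwarded. To confirm the setting inside the node:

	minikube ssh -p default-k8s-diff-port-859276 -- sysctl net.ipv4.conf.all.route_localnet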
	
	
	==> kube-scheduler [978c3213baf15d70be19ba61ca307b752fef7c010cd37640b6d37cbac117cab0] <==
	I1121 14:41:50.671070       1 serving.go:386] Generated self-signed cert in-memory
	W1121 14:41:52.220161       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1121 14:41:52.220960       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1121 14:41:52.220981       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1121 14:41:52.220993       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1121 14:41:52.275284       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 14:41:52.275328       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:41:52.285792       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:41:52.286137       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:41:52.287992       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:41:52.288411       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 14:41:52.387498       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
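	The requestheader_controller warning above names its own remedy. Since the denied identity in this log is the client user "system:kube-scheduler" rather than a service account, a sketch of the suggested rolebinding would use `--user` instead of `--serviceaccount` (the binding name here is arbitrary):
	
	  kubectl -n kube-system create rolebinding extension-apiserver-authn-reader-scheduler \
	    --role=extension-apiserver-authentication-reader \
	    --user=system:kube-scheduler
	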
	
	
	==> kubelet <==
	Nov 21 14:41:56 default-k8s-diff-port-859276 kubelet[716]: I1121 14:41:56.289916     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9hnw\" (UniqueName: \"kubernetes.io/projected/af77ab56-321e-4191-833a-c34cd80fb085-kube-api-access-f9hnw\") pod \"dashboard-metrics-scraper-6ffb444bf9-2xhmq\" (UID: \"af77ab56-321e-4191-833a-c34cd80fb085\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq"
	Nov 21 14:41:56 default-k8s-diff-port-859276 kubelet[716]: I1121 14:41:56.289961     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3ac90d1e-9e41-4140-9b48-08e97b11f9e7-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-j7rcv\" (UID: \"3ac90d1e-9e41-4140-9b48-08e97b11f9e7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j7rcv"
	Nov 21 14:41:56 default-k8s-diff-port-859276 kubelet[716]: I1121 14:41:56.289979     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsx9m\" (UniqueName: \"kubernetes.io/projected/3ac90d1e-9e41-4140-9b48-08e97b11f9e7-kube-api-access-dsx9m\") pod \"kubernetes-dashboard-855c9754f9-j7rcv\" (UID: \"3ac90d1e-9e41-4140-9b48-08e97b11f9e7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j7rcv"
	Nov 21 14:41:56 default-k8s-diff-port-859276 kubelet[716]: I1121 14:41:56.290001     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/af77ab56-321e-4191-833a-c34cd80fb085-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-2xhmq\" (UID: \"af77ab56-321e-4191-833a-c34cd80fb085\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq"
	Nov 21 14:41:58 default-k8s-diff-port-859276 kubelet[716]: I1121 14:41:58.858189     716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 21 14:42:02 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:02.774462     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j7rcv" podStartSLOduration=3.233913839 podStartE2EDuration="6.774439437s" podCreationTimestamp="2025-11-21 14:41:56 +0000 UTC" firstStartedPulling="2025-11-21 14:41:56.506663859 +0000 UTC m=+7.012561300" lastFinishedPulling="2025-11-21 14:42:00.047189453 +0000 UTC m=+10.553086898" observedRunningTime="2025-11-21 14:42:00.701448604 +0000 UTC m=+11.207346067" watchObservedRunningTime="2025-11-21 14:42:02.774439437 +0000 UTC m=+13.280336902"
	Nov 21 14:42:03 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:03.695601     716 scope.go:117] "RemoveContainer" containerID="1b7aa4e217f55b8d6142f450b9bd029763591327ff3d253ab12f40a85a502fe5"
	Nov 21 14:42:04 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:04.700872     716 scope.go:117] "RemoveContainer" containerID="1b7aa4e217f55b8d6142f450b9bd029763591327ff3d253ab12f40a85a502fe5"
	Nov 21 14:42:04 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:04.701042     716 scope.go:117] "RemoveContainer" containerID="c3f566419ca7cc288119280617c9dbea9b26a92ebc90b789a652099c9f91f727"
	Nov 21 14:42:04 default-k8s-diff-port-859276 kubelet[716]: E1121 14:42:04.701248     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2xhmq_kubernetes-dashboard(af77ab56-321e-4191-833a-c34cd80fb085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq" podUID="af77ab56-321e-4191-833a-c34cd80fb085"
	Nov 21 14:42:05 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:05.705674     716 scope.go:117] "RemoveContainer" containerID="c3f566419ca7cc288119280617c9dbea9b26a92ebc90b789a652099c9f91f727"
	Nov 21 14:42:05 default-k8s-diff-port-859276 kubelet[716]: E1121 14:42:05.705877     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2xhmq_kubernetes-dashboard(af77ab56-321e-4191-833a-c34cd80fb085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq" podUID="af77ab56-321e-4191-833a-c34cd80fb085"
	Nov 21 14:42:13 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:13.590636     716 scope.go:117] "RemoveContainer" containerID="c3f566419ca7cc288119280617c9dbea9b26a92ebc90b789a652099c9f91f727"
	Nov 21 14:42:13 default-k8s-diff-port-859276 kubelet[716]: E1121 14:42:13.590929     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2xhmq_kubernetes-dashboard(af77ab56-321e-4191-833a-c34cd80fb085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq" podUID="af77ab56-321e-4191-833a-c34cd80fb085"
	Nov 21 14:42:23 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:23.749843     716 scope.go:117] "RemoveContainer" containerID="7bc1381c40424bf5c5ec170863470ce4dfb22c650dc079426153a476b7dc54fb"
	Nov 21 14:42:26 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:26.601066     716 scope.go:117] "RemoveContainer" containerID="c3f566419ca7cc288119280617c9dbea9b26a92ebc90b789a652099c9f91f727"
	Nov 21 14:42:26 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:26.762639     716 scope.go:117] "RemoveContainer" containerID="c3f566419ca7cc288119280617c9dbea9b26a92ebc90b789a652099c9f91f727"
	Nov 21 14:42:26 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:26.762881     716 scope.go:117] "RemoveContainer" containerID="792ec4dce17b4ab7790240101a6b580c76469012712ad8190898f09de8430e58"
	Nov 21 14:42:26 default-k8s-diff-port-859276 kubelet[716]: E1121 14:42:26.763041     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2xhmq_kubernetes-dashboard(af77ab56-321e-4191-833a-c34cd80fb085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq" podUID="af77ab56-321e-4191-833a-c34cd80fb085"
	Nov 21 14:42:33 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:33.589865     716 scope.go:117] "RemoveContainer" containerID="792ec4dce17b4ab7790240101a6b580c76469012712ad8190898f09de8430e58"
	Nov 21 14:42:33 default-k8s-diff-port-859276 kubelet[716]: E1121 14:42:33.590061     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2xhmq_kubernetes-dashboard(af77ab56-321e-4191-833a-c34cd80fb085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq" podUID="af77ab56-321e-4191-833a-c34cd80fb085"
	Nov 21 14:42:42 default-k8s-diff-port-859276 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:42:42 default-k8s-diff-port-859276 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:42:42 default-k8s-diff-port-859276 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 21 14:42:42 default-k8s-diff-port-859276 systemd[1]: kubelet.service: Consumed 1.654s CPU time.
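	The kubelet lines above show dashboard-metrics-scraper caught in CrashLoopBackOff, with the back-off climbing from 10s to 20s before the pause stops the kubelet. The usual first checks, reusing the pod name from the log:
	
	  kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-2xhmq
	  # --previous prints the logs of the container instance that crashed:
	  kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-2xhmq --previous
	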
	
	
	==> kubernetes-dashboard [b4239d7c08405352938813c62cb6f15113973121a11d38199bd1bf0c93a47049] <==
	2025/11/21 14:42:00 Using namespace: kubernetes-dashboard
	2025/11/21 14:42:00 Using in-cluster config to connect to apiserver
	2025/11/21 14:42:00 Using secret token for csrf signing
	2025/11/21 14:42:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 14:42:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 14:42:00 Successful initial request to the apiserver, version: v1.34.1
	2025/11/21 14:42:00 Generating JWE encryption key
	2025/11/21 14:42:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 14:42:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 14:42:00 Initializing JWE encryption key from synchronized object
	2025/11/21 14:42:00 Creating in-cluster Sidecar client
	2025/11/21 14:42:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 14:42:00 Serving insecurely on HTTP port: 9090
	2025/11/21 14:42:00 Starting overwatch
	2025/11/21 14:42:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
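	The metric client health check fails on "get services dashboard-metrics-scraper", which fits a cluster still settling after restart and a scraper pod that is crash-looping in the kubelet log above; the dashboard itself keeps serving on port 9090 and retries every 30 seconds. Quick checks that the Service exists and has ready backends, using the names from the log:
	
	  kubectl -n kubernetes-dashboard get svc dashboard-metrics-scraper
	  kubectl -n kubernetes-dashboard get endpointslices
	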
	
	
	==> storage-provisioner [7bc1381c40424bf5c5ec170863470ce4dfb22c650dc079426153a476b7dc54fb] <==
	I1121 14:41:52.983129       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 14:42:22.987961       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
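	This first provisioner instance dies dialing 10.96.0.1:443, the in-cluster apiserver Service IP, with an i/o timeout; the replacement instance below succeeds, so the failure window matches the service network not yet being programmed right after the restart. One way to probe that path from inside the cluster, sketched with an assumed curl image (curlimages/curl):
	
	  kubectl run api-probe --rm -i --restart=Never --image=curlimages/curl:8.5.0 -- \
	    curl -sk --max-time 10 https://10.96.0.1:443/version
	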
	
	
	==> storage-provisioner [d52774abd97b7c8a03b60d341da70eead219e69b9d2e38977c51930994e08e91] <==
	I1121 14:42:23.810124       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:42:23.817507       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:42:23.817623       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:42:23.819698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:27.275675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:31.536195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:35.135249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:38.189768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:41.212658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:41.216509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:42:41.216678       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:42:41.216805       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"527dbf43-5986-4266-9001-2722967aec7b", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-859276_3843f557-a18a-46d3-9585-b75033559ed1 became leader
	I1121 14:42:41.216821       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-859276_3843f557-a18a-46d3-9585-b75033559ed1!
	W1121 14:42:41.218391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:41.220984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:42:41.317013       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-859276_3843f557-a18a-46d3-9585-b75033559ed1!
	W1121 14:42:43.224917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:43.229918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:45.233961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:45.238077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:47.241314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:47.374730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
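The repeated "v1 Endpoints is deprecated in v1.33+" warnings in the second storage-provisioner log are noise rather than failures: the provisioner's leader election still writes its kube-system/k8s.io-minikube-hostpath lock as an Endpoints object, and the log shows the lease is in fact acquired. The objects involved can be listed with the names from the log:

  kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
  # The replacement API the warning points at:
  kubectl -n kube-system get endpointslices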
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-859276 -n default-k8s-diff-port-859276
E1121 14:42:48.292787   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:42:48.299519   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:42:48.313257   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:42:48.334663   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:42:48.376692   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-859276 -n default-k8s-diff-port-859276: exit status 2 (593.937453ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-859276 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E1121 14:42:48.458343   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-859276
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-859276:

-- stdout --
	[
	    {
	        "Id": "2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac",
	        "Created": "2025-11-21T14:40:05.048409185Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300725,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T14:41:42.673243364Z",
	            "FinishedAt": "2025-11-21T14:41:41.622503376Z"
	        },
	        "Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
	        "ResolvConfPath": "/var/lib/docker/containers/2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac/hosts",
	        "LogPath": "/var/lib/docker/containers/2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac/2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac-json.log",
	        "Name": "/default-k8s-diff-port-859276",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-859276:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-859276",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2d534a2a3b1fe7d6e16c898b8a16a17cbf02051799732b931d600187683c7eac",
	                "LowerDir": "/var/lib/docker/overlay2/37ea76c5bad6d11bb4dd4e337b8935aab81c4bc411039c222a9d057b3c4b3202-init/diff:/var/lib/docker/overlay2/52a35d389e2d282a009a54c52e3f2c3f22a4d7ab5a2644a29d16127d21682576/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37ea76c5bad6d11bb4dd4e337b8935aab81c4bc411039c222a9d057b3c4b3202/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37ea76c5bad6d11bb4dd4e337b8935aab81c4bc411039c222a9d057b3c4b3202/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37ea76c5bad6d11bb4dd4e337b8935aab81c4bc411039c222a9d057b3c4b3202/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-859276",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-859276/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-859276",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-859276",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-859276",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5d74d31ab700c5fab742a714db55c5616360619eba3f6a1ba886fb12f0b38cc4",
	            "SandboxKey": "/var/run/docker/netns/5d74d31ab700",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-859276": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "160a2921f00f660205c7789d2cbe27b525c000c5d85520fd19733f7917bfd7fd",
	                    "EndpointID": "c33e9ca87d54f1c1251daed15c8e67c9a56b12158d88d405240c25b97e6e25b0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "a2:8d:9e:f6:45:6a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-859276",
	                        "2d534a2a3b1f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
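When only one field of the inspect output matters, a Go-template query is terser than the full JSON dump; for example, pulling the host port mapped to the API server port 8444/tcp shown above (33122):

  docker inspect -f '{{ (index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort }}' \
    default-k8s-diff-port-859276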
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-859276 -n default-k8s-diff-port-859276
E1121 14:42:48.620080   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:42:48.941632   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-859276 -n default-k8s-diff-port-859276: exit status 2 (553.399828ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
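The Host field prints "Running" while the command still exits 2 because other components are unhealthy after the pause (the kubelet was stopped in the logs above); minikube folds per-component state into the exit code. Dumping every component at once makes the mismatch visible, e.g.:

  out/minikube-linux-amd64 status -p default-k8s-diff-port-859276 --output json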
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-859276 logs -n 25
E1121 14:42:49.583720   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-859276 logs -n 25: (1.304496197s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-989875 sudo journalctl -xeu kubelet --all --full --no-pager                                                          │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo cat /etc/kubernetes/kubelet.conf                                                                         │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo cat /var/lib/kubelet/config.yaml                                                                         │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl status docker --all --full --no-pager                                                          │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl cat docker --no-pager                                                                          │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo cat /etc/docker/daemon.json                                                                              │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p custom-flannel-989875 sudo docker system info                                                                                       │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl status cri-docker --all --full --no-pager                                                      │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl cat cri-docker --no-pager                                                                      │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                 │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p custom-flannel-989875 sudo cat /usr/lib/systemd/system/cri-docker.service                                                           │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo cri-dockerd --version                                                                                    │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl status containerd --all --full --no-pager                                                      │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl cat containerd --no-pager                                                                      │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo cat /lib/systemd/system/containerd.service                                                               │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo cat /etc/containerd/config.toml                                                                          │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo containerd config dump                                                                                   │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl status crio --all --full --no-pager                                                            │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo systemctl cat crio --no-pager                                                                            │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                  │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ ssh     │ -p custom-flannel-989875 sudo crio config                                                                                              │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ delete  │ -p custom-flannel-989875                                                                                                               │ custom-flannel-989875        │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ image   │ default-k8s-diff-port-859276 image list --format=json                                                                                  │ default-k8s-diff-port-859276 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │ 21 Nov 25 14:42 UTC │
	│ start   │ -p bridge-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio │ bridge-989875                │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-859276 --alsologtostderr -v=1                                                                                 │ default-k8s-diff-port-859276 │ jenkins │ v1.37.0 │ 21 Nov 25 14:42 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 14:42:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 14:42:42.017035  323150 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:42:42.017172  323150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:42:42.017184  323150 out.go:374] Setting ErrFile to fd 2...
	I1121 14:42:42.017190  323150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:42:42.017379  323150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:42:42.017885  323150 out.go:368] Setting JSON to false
	I1121 14:42:42.019217  323150 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5111,"bootTime":1763731051,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:42:42.019329  323150 start.go:143] virtualization: kvm guest
	I1121 14:42:42.022501  323150 out.go:179] * [bridge-989875] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:42:42.024235  323150 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:42:42.024266  323150 notify.go:221] Checking for updates...
	I1121 14:42:42.026645  323150 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:42:42.027821  323150 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:42:42.032777  323150 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:42:42.033850  323150 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:42:42.035370  323150 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:42:42.099604  310504 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 14:42:42.100040  310504 kubeadm.go:319] 
	I1121 14:42:42.100134  310504 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 14:42:42.100140  310504 kubeadm.go:319] 
	I1121 14:42:42.100241  310504 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 14:42:42.100247  310504 kubeadm.go:319] 
	I1121 14:42:42.100283  310504 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 14:42:42.100365  310504 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 14:42:42.100430  310504 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 14:42:42.100436  310504 kubeadm.go:319] 
	I1121 14:42:42.100504  310504 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 14:42:42.100510  310504 kubeadm.go:319] 
	I1121 14:42:42.100607  310504 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 14:42:42.100615  310504 kubeadm.go:319] 
	I1121 14:42:42.100698  310504 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 14:42:42.100805  310504 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 14:42:42.100903  310504 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 14:42:42.100910  310504 kubeadm.go:319] 
	I1121 14:42:42.101009  310504 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 14:42:42.101104  310504 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 14:42:42.101110  310504 kubeadm.go:319] 
	I1121 14:42:42.101209  310504 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token k1x4rx.bzruoumyrdb3i6q0 \
	I1121 14:42:42.101332  310504 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 \
	I1121 14:42:42.101357  310504 kubeadm.go:319] 	--control-plane 
	I1121 14:42:42.101363  310504 kubeadm.go:319] 
	I1121 14:42:42.101464  310504 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 14:42:42.101470  310504 kubeadm.go:319] 
	I1121 14:42:42.101581  310504 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token k1x4rx.bzruoumyrdb3i6q0 \
	I1121 14:42:42.101722  310504 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f61f1a5a9a2c6e402420e419bcf82211dd9cf42c2d71b101000a986289f66d60 
	I1121 14:42:42.105964  310504 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 14:42:42.106116  310504 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 14:42:42.106139  310504 cni.go:84] Creating CNI manager for "bridge"
	I1121 14:42:42.107874  310504 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1121 14:42:42.037502  323150 config.go:182] Loaded profile config "default-k8s-diff-port-859276": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:42:42.037643  323150 config.go:182] Loaded profile config "enable-default-cni-989875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:42:42.037788  323150 config.go:182] Loaded profile config "flannel-989875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:42:42.037919  323150 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:42:42.063469  323150 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:42:42.063609  323150 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:42:42.136260  323150 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-21 14:42:42.122256423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:42:42.136411  323150 docker.go:319] overlay module found
	I1121 14:42:42.138109  323150 out.go:179] * Using the docker driver based on user configuration
	I1121 14:42:42.139445  323150 start.go:309] selected driver: docker
	I1121 14:42:42.139456  323150 start.go:930] validating driver "docker" against <nil>
	I1121 14:42:42.139469  323150 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:42:42.140677  323150 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:42:42.222292  323150 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-21 14:42:42.210241773 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:42:42.222497  323150 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 14:42:42.222929  323150 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 14:42:42.224407  323150 out.go:179] * Using Docker driver with root privileges
	I1121 14:42:42.227409  323150 cni.go:84] Creating CNI manager for "bridge"
	I1121 14:42:42.227438  323150 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1121 14:42:42.227515  323150 start.go:353] cluster config:
	{Name:bridge-989875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-989875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:42:42.228938  323150 out.go:179] * Starting "bridge-989875" primary control-plane node in "bridge-989875" cluster
	I1121 14:42:42.230343  323150 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 14:42:42.231822  323150 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
	I1121 14:42:42.233326  323150 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:42:42.233365  323150 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 14:42:42.233375  323150 cache.go:65] Caching tarball of preloaded images
	I1121 14:42:42.233469  323150 preload.go:238] Found /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 14:42:42.233480  323150 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 14:42:42.233626  323150 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/bridge-989875/config.json ...
	I1121 14:42:42.233652  323150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/bridge-989875/config.json: {Name:mk930f7e855eb7acbedfd0b5d29db36ede4ab530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:42:42.233795  323150 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
	I1121 14:42:42.264057  323150 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
	I1121 14:42:42.264082  323150 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
	I1121 14:42:42.264101  323150 cache.go:243] Successfully downloaded all kic artifacts
	I1121 14:42:42.264134  323150 start.go:360] acquireMachinesLock for bridge-989875: {Name:mk2d0c076b6dcc60f8c2dc133df0fce473032530 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 14:42:42.264243  323150 start.go:364] duration metric: took 89.139µs to acquireMachinesLock for "bridge-989875"
	I1121 14:42:42.264312  323150 start.go:93] Provisioning new machine with config: &{Name:bridge-989875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-989875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:42:42.264442  323150 start.go:125] createHost starting for "" (driver="docker")
	I1121 14:42:40.221632  316581 out.go:252]   - Generating certificates and keys ...
	I1121 14:42:40.221741  316581 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 14:42:40.221838  316581 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 14:42:40.825104  316581 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 14:42:41.022919  316581 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 14:42:41.600308  316581 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 14:42:42.051949  316581 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 14:42:42.393607  316581 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 14:42:42.393768  316581 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [flannel-989875 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:42:42.878175  316581 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 14:42:42.878364  316581 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [flannel-989875 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1121 14:42:43.153122  316581 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 14:42:43.591699  316581 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 14:42:43.716974  316581 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 14:42:43.717155  316581 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 14:42:44.366703  316581 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 14:42:44.494863  316581 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 14:42:45.157765  316581 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 14:42:45.522460  316581 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 14:42:45.836879  316581 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 14:42:45.836997  316581 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 14:42:45.868047  316581 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
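
The kubeadm lines above walk the standard init phases in order: certs, kubeconfig files, the local etcd manifest, then the control-plane static pod manifests. As a rough sketch, the same sequence can be reproduced with the kubeadm CLI directly; the config path below is an assumption standing in for whatever config minikube renders:

	# Sketch of the kubeadm phases minikube drives above (config path assumed):
	kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
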
	I1121 14:42:42.111704  310504 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1121 14:42:42.122477  310504 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1121 14:42:42.139107  310504 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 14:42:42.139245  310504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:42:42.139335  310504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-989875 minikube.k8s.io/updated_at=2025_11_21T14_42_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162 minikube.k8s.io/name=enable-default-cni-989875 minikube.k8s.io/primary=true
	I1121 14:42:42.244691  310504 ops.go:34] apiserver oom_adj: -16
	I1121 14:42:42.244834  310504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:42:42.744900  310504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:42:43.245437  310504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:42:43.745537  310504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:42:44.245749  310504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:42:44.745752  310504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:42:45.245748  310504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:42:45.745774  310504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:42:46.245933  310504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
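
The burst of `kubectl get sa default` runs above (continuing below at 14:42:46-47) is minikube's elevateKubeSystemPrivileges step: bind kube-system:default to cluster-admin, then poll until the default ServiceAccount exists. A minimal shell sketch of that loop, with the binary and kubeconfig paths taken from the log:

	# Bind kube-system:default to cluster-admin, then wait for the default SA:
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get sa default >/dev/null 2>&1; do
	  sleep 0.5   # the timestamps above tick at roughly 500ms intervals
	done
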
	I1121 14:42:42.267404  323150 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1121 14:42:42.268187  323150 start.go:159] libmachine.API.Create for "bridge-989875" (driver="docker")
	I1121 14:42:42.268225  323150 client.go:173] LocalClient.Create starting
	I1121 14:42:42.268320  323150 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11045/.minikube/certs/ca.pem
	I1121 14:42:42.268380  323150 main.go:143] libmachine: Decoding PEM data...
	I1121 14:42:42.268437  323150 main.go:143] libmachine: Parsing certificate...
	I1121 14:42:42.268613  323150 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21847-11045/.minikube/certs/cert.pem
	I1121 14:42:42.268679  323150 main.go:143] libmachine: Decoding PEM data...
	I1121 14:42:42.268696  323150 main.go:143] libmachine: Parsing certificate...
	I1121 14:42:42.269116  323150 cli_runner.go:164] Run: docker network inspect bridge-989875 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 14:42:42.293487  323150 cli_runner.go:211] docker network inspect bridge-989875 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 14:42:42.293585  323150 network_create.go:284] running [docker network inspect bridge-989875] to gather additional debugging logs...
	I1121 14:42:42.293619  323150 cli_runner.go:164] Run: docker network inspect bridge-989875
	W1121 14:42:42.314566  323150 cli_runner.go:211] docker network inspect bridge-989875 returned with exit code 1
	I1121 14:42:42.314599  323150 network_create.go:287] error running [docker network inspect bridge-989875]: docker network inspect bridge-989875: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-989875 not found
	I1121 14:42:42.314618  323150 network_create.go:289] output of [docker network inspect bridge-989875]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-989875 not found
	
	** /stderr **
	I1121 14:42:42.314741  323150 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 14:42:42.335301  323150 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-28b1c9d83f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:19:47:f8:32:b5} reservation:<nil>}
	I1121 14:42:42.336154  323150 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-701670d7ab7f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4a:ae:cd:b4:3f:5e} reservation:<nil>}
	I1121 14:42:42.337110  323150 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-753e8bd7b54d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:aa:87:d4:c1:6c:14} reservation:<nil>}
	I1121 14:42:42.337734  323150 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-160a2921f00f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ae:95:ac:10:d1:17} reservation:<nil>}
	I1121 14:42:42.338823  323150 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1b0bab67c782 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:71:88:15:bd:85} reservation:<nil>}
	I1121 14:42:42.339861  323150 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-8b33693d17c1 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:6a:aa:4b:11:b2:fa} reservation:<nil>}
	I1121 14:42:42.340998  323150 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb2560}
	I1121 14:42:42.341027  323150 network_create.go:124] attempt to create docker network bridge-989875 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1121 14:42:42.341082  323150 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-989875 bridge-989875
	I1121 14:42:42.408997  323150 network_create.go:108] docker network bridge-989875 192.168.103.0/24 created
	I1121 14:42:42.409095  323150 kic.go:121] calculated static IP "192.168.103.2" for the "bridge-989875" container
	I1121 14:42:42.409192  323150 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 14:42:42.429273  323150 cli_runner.go:164] Run: docker volume create bridge-989875 --label name.minikube.sigs.k8s.io=bridge-989875 --label created_by.minikube.sigs.k8s.io=true
	I1121 14:42:42.450170  323150 oci.go:103] Successfully created a docker volume bridge-989875
	I1121 14:42:42.450267  323150 cli_runner.go:164] Run: docker run --rm --name bridge-989875-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-989875 --entrypoint /usr/bin/test -v bridge-989875:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
	I1121 14:42:42.875320  323150 oci.go:107] Successfully prepared a docker volume bridge-989875
	I1121 14:42:42.875402  323150 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 14:42:42.875420  323150 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 14:42:42.875486  323150 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v bridge-989875:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
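
Between 14:42:42.335 and 42.341 the kic driver scans the existing bridge networks (192.168.49/58/67/76/85/94) before settling on the free 192.168.103.0/24; it then creates the network, a named volume, and extracts the preloaded image tarball into that volume. A sketch of the volume and extraction steps, with the image and paths copied from the log:

	# Create the kic volume and unpack the preload tarball into it:
	docker volume create bridge-989875 \
	  --label name.minikube.sigs.k8s.io=bridge-989875 \
	  --label created_by.minikube.sigs.k8s.io=true
	PRELOAD=/home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD:/preloaded.tar:ro" -v bridge-989875:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a \
	  -I lz4 -xf /preloaded.tar -C /extractDir
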
	I1121 14:42:46.745962  310504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:42:47.245647  310504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:42:47.745044  310504 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 14:42:47.866955  310504 kubeadm.go:1114] duration metric: took 5.727741438s to wait for elevateKubeSystemPrivileges
	I1121 14:42:47.867001  310504 kubeadm.go:403] duration metric: took 16.866424511s to StartCluster
	I1121 14:42:47.867024  310504 settings.go:142] acquiring lock: {Name:mkb207cf001a407898b2dbfd9fb9b3881f173a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:42:47.867112  310504 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:42:47.868804  310504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21847-11045/kubeconfig: {Name:mk8b4d5da99d04cbf7f231b23fa7715aca379ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 14:42:47.869181  310504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 14:42:47.869756  310504 config.go:182] Loaded profile config "enable-default-cni-989875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:42:47.870758  310504 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 14:42:47.870861  310504 addons.go:70] Setting storage-provisioner=true in profile "enable-default-cni-989875"
	I1121 14:42:47.870912  310504 addons.go:239] Setting addon storage-provisioner=true in "enable-default-cni-989875"
	I1121 14:42:47.870942  310504 host.go:66] Checking if "enable-default-cni-989875" exists ...
	I1121 14:42:47.870998  310504 addons.go:70] Setting default-storageclass=true in profile "enable-default-cni-989875"
	I1121 14:42:47.871224  310504 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-989875"
	I1121 14:42:47.871629  310504 cli_runner.go:164] Run: docker container inspect enable-default-cni-989875 --format={{.State.Status}}
	I1121 14:42:47.872196  310504 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 14:42:47.875140  310504 cli_runner.go:164] Run: docker container inspect enable-default-cni-989875 --format={{.State.Status}}
	I1121 14:42:47.876049  310504 out.go:179] * Verifying Kubernetes components...
	I1121 14:42:47.877228  310504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 14:42:47.915263  310504 addons.go:239] Setting addon default-storageclass=true in "enable-default-cni-989875"
	I1121 14:42:47.915317  310504 host.go:66] Checking if "enable-default-cni-989875" exists ...
	I1121 14:42:47.915641  310504 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 14:42:45.877397  316581 out.go:252]   - Booting up control plane ...
	I1121 14:42:45.877511  316581 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 14:42:45.877628  316581 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 14:42:45.877743  316581 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 14:42:45.888371  316581 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 14:42:45.888590  316581 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 14:42:45.895518  316581 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 14:42:45.895777  316581 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 14:42:45.895853  316581 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 14:42:45.995423  316581 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 14:42:45.995611  316581 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 14:42:46.996786  316581 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001415254s
	I1121 14:42:46.999707  316581 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 14:42:46.999829  316581 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1121 14:42:46.999967  316581 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 14:42:47.000044  316581 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
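
The control-plane-check lines name the three health endpoints kubeadm polls while the static pods come up. They can be probed by hand from the node; -k skips verification because the serving certs are signed by the cluster CA rather than a public one:

	# Probe the same endpoints kubeadm is checking above:
	curl -ks https://192.168.94.2:8443/livez     # kube-apiserver
	curl -ks https://127.0.0.1:10257/healthz     # kube-controller-manager
	curl -ks https://127.0.0.1:10259/livez       # kube-scheduler
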
	I1121 14:42:47.916034  310504 cli_runner.go:164] Run: docker container inspect enable-default-cni-989875 --format={{.State.Status}}
	I1121 14:42:47.919954  310504 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:42:47.919973  310504 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 14:42:47.920027  310504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-989875
	I1121 14:42:47.953218  310504 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 14:42:47.953336  310504 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 14:42:47.953446  310504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-989875
	I1121 14:42:47.957618  310504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/enable-default-cni-989875/id_rsa Username:docker}
	I1121 14:42:48.003534  310504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/enable-default-cni-989875/id_rsa Username:docker}
	I1121 14:42:48.042280  310504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 14:42:48.125947  310504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 14:42:48.148407  310504 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 14:42:48.190893  310504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 14:42:48.523834  310504 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1121 14:42:48.821021  310504 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-989875" to be "Ready" ...
	I1121 14:42:48.832184  310504 node_ready.go:49] node "enable-default-cni-989875" is "Ready"
	I1121 14:42:48.832337  310504 node_ready.go:38] duration metric: took 10.668106ms for node "enable-default-cni-989875" to be "Ready" ...
	I1121 14:42:48.832480  310504 api_server.go:52] waiting for apiserver process to appear ...
	I1121 14:42:48.832645  310504 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:42:48.843346  310504 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
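
The sed pipeline at 14:42:48.042 edits the coredns ConfigMap in place, and the "host record injected" line confirms the rewrite landed. Reconstructed from the sed expressions, the Corefile gains a hosts block just before its forward stanza (plus a log directive before errors), roughly:

	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

This is what lets pods resolve host.minikube.internal to the gateway of the cluster's docker network.
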
	
	
	==> CRI-O <==
	Nov 21 14:42:26 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:26.611076223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:42:26 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:26.611737396Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 14:42:26 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:26.636537164Z" level=info msg="Created container 792ec4dce17b4ab7790240101a6b580c76469012712ad8190898f09de8430e58: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq/dashboard-metrics-scraper" id=8b931834-a088-4491-b5fa-7704ec101107 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 14:42:26 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:26.637217562Z" level=info msg="Starting container: 792ec4dce17b4ab7790240101a6b580c76469012712ad8190898f09de8430e58" id=7ac607a2-0450-4366-850f-f7754ba17ae9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 14:42:26 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:26.639447479Z" level=info msg="Started container" PID=1706 containerID=792ec4dce17b4ab7790240101a6b580c76469012712ad8190898f09de8430e58 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq/dashboard-metrics-scraper id=7ac607a2-0450-4366-850f-f7754ba17ae9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f93865affc737d128af20ba31e6b1c8936eb856231eb7aadb41f5d2bf14fc837
	Nov 21 14:42:26 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:26.764379761Z" level=info msg="Removing container: c3f566419ca7cc288119280617c9dbea9b26a92ebc90b789a652099c9f91f727" id=c10811e1-4029-4a09-811b-695babd0ffdc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 14:42:26 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:26.775872157Z" level=info msg="Removed container c3f566419ca7cc288119280617c9dbea9b26a92ebc90b789a652099c9f91f727: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq/dashboard-metrics-scraper" id=c10811e1-4029-4a09-811b-695babd0ffdc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.319264228Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.323884244Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.323914015Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.323937481Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.327635698Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.327663299Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.327683044Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.33111884Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.33113928Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.33115438Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.334413593Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.334433516Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.406300835Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.412212595Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.412385669Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.412429228Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.416830417Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 21 14:42:33 default-k8s-diff-port-859276 crio[565]: time="2025-11-21T14:42:33.416856827Z" level=info msg="Updated default CNI network name to kindnet"
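
The CREATE / WRITE / RENAME sequence CRI-O reports above is kindnet replacing its CNI config atomically: it writes the full file as 10-kindnet.conflist.temp, then renames it over 10-kindnet.conflist, so the runtime never observes a half-written config. The same pattern in shell (conflist body elided):

	# Atomic config replacement as observed by CRI-O's CNI monitor:
	cat > /etc/cni/net.d/10-kindnet.conflist.temp <<'EOF'
	{ "name": "kindnet", "...": "..." }
	EOF
	mv /etc/cni/net.d/10-kindnet.conflist.temp /etc/cni/net.d/10-kindnet.conflist  # rename(2) is atomic within a filesystem
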
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	792ec4dce17b4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   f93865affc737       dashboard-metrics-scraper-6ffb444bf9-2xhmq             kubernetes-dashboard
	d52774abd97b7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago      Running             storage-provisioner         1                   35f6ec0c62555       storage-provisioner                                    kube-system
	b4239d7c08405       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago      Running             kubernetes-dashboard        0                   941e23f361748       kubernetes-dashboard-855c9754f9-j7rcv                  kubernetes-dashboard
	85f36b2dcfe6f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   8a247c11be3d4       busybox                                                default
	6275d647f5a78       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   666af12636f0e       coredns-66bc5c9577-wq9lw                               kube-system
	7bc1381c40424       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   35f6ec0c62555       storage-provisioner                                    kube-system
	f246e7bcac61f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   a2dde58cc92db       kube-proxy-vwzb2                                       kube-system
	8b13c1531e7f2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   35f18666e4168       kindnet-28knv                                          kube-system
	c119f5ad65c9c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   8218fc154f4ba       etcd-default-k8s-diff-port-859276                      kube-system
	978c3213baf15       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   d210dcb5cf9a4       kube-scheduler-default-k8s-diff-port-859276            kube-system
	bec7da40a6663       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   08460b623962b       kube-controller-manager-default-k8s-diff-port-859276   kube-system
	46730d3f8950d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   6491bcb036cb7       kube-apiserver-default-k8s-diff-port-859276            kube-system
	
	
	==> coredns [6275d647f5a78c60d9721c98980ddc7c05db78ff9c3966dde958a62fc27e8523] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54570 - 64318 "HINFO IN 661135359172939465.2483412663312711059. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.484856595s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
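
The three i/o timeouts against https://10.96.0.1:443 mean this pod briefly could not reach the kubernetes Service VIP while node networking converged after the restart; CoreDNS starts with an unsynced API and recovers. If the timeouts persisted, reasonable first checks (a generic sketch, not minikube-specific) would be:

	# First checks when a pod cannot reach the apiserver Service VIP:
	kubectl get svc kubernetes -o wide                                   # VIP should be 10.96.0.1
	kubectl get endpointslices -l kubernetes.io/service-name=kubernetes  # backing apiserver endpoint
	kubectl -n kube-system get pods -l k8s-app=kube-proxy                # kubeadm's usual kube-proxy label
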
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-859276
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-859276
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=29e0798733fefbdc471fd2bbb38f6a7ae2a26162
	                    minikube.k8s.io/name=default-k8s-diff-port-859276
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T14_40_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 14:40:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-859276
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 14:42:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 14:42:23 +0000   Fri, 21 Nov 2025 14:40:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 14:42:23 +0000   Fri, 21 Nov 2025 14:40:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 14:42:23 +0000   Fri, 21 Nov 2025 14:40:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 14:42:23 +0000   Fri, 21 Nov 2025 14:41:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-859276
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf10fb2f940d419c1d138723691cfee8
	  System UUID:                343bd2de-0163-43e5-a948-02f67c21f6df
	  Boot ID:                    4deed74e-d1ae-403f-b303-7338c681df31
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-wq9lw                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-859276                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m26s
	  kube-system                 kindnet-28knv                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-859276             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-859276    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-vwzb2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-default-k8s-diff-port-859276             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2xhmq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-j7rcv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m19s                  kube-proxy       
	  Normal  Starting                 56s                    kube-proxy       
	  Normal  Starting                 2m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m31s (x8 over 2m31s)  kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m31s (x8 over 2m31s)  kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m31s (x8 over 2m31s)  kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m25s                  kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m25s                  kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m25s                  kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m21s                  node-controller  Node default-k8s-diff-port-859276 event: Registered Node default-k8s-diff-port-859276 in Controller
	  Normal  NodeReady                99s                    kubelet          Node default-k8s-diff-port-859276 status is now: NodeReady
	  Normal  Starting                 60s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)      kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)      kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)      kubelet          Node default-k8s-diff-port-859276 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                    node-controller  Node default-k8s-diff-port-859276 event: Registered Node default-k8s-diff-port-859276 in Controller
	
	
	==> dmesg <==
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +1.023964] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +2.047715] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +4.031557] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[  +8.511113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[ +16.382337] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 13:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee 89 fe 72 6d e7 72 be df 09 02 f4 08 00
	[Nov21 14:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 89 9f d1 1f 4a 08 06
	[  +0.000384] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a d9 e5 66 e3 a7 08 06
	[ +27.419124] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 5b cd 8c a5 c6 08 06
	[  +0.037375] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3e 0d f9 ea 1b 38 08 06
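
The recurring "martian source ... from 127.0.0.1" lines are loopback-sourced packets appearing on eth0. With kube-proxy in iptables mode this is commonly a side effect of route_localnet=1, which kube-proxy sets to accept NodePort connections on localhost (its log below says exactly that); the kernel then flags such packets as martians when log_martians is on. The relevant sysctls, for inspection only:

	# Sysctls behind the martian logging (read values, not changes):
	sysctl net.ipv4.conf.all.route_localnet
	sysctl net.ipv4.conf.all.log_martians
	sysctl net.ipv4.conf.all.rp_filter
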
	
	
	==> etcd [c119f5ad65c9c3667299aef384e5841ae27efc53c987bcbfb515ef1c25aa3b04] <==
	{"level":"warn","ts":"2025-11-21T14:41:51.355899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.363821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.371840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.381500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.389815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.400926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.408490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.416397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.424906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.432930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.441267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.448742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.456591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.464500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.476209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.480898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.497303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.506230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.515097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T14:41:51.584003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59370","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T14:42:33.157048Z","caller":"traceutil/trace.go:172","msg":"trace[980987117] transaction","detail":"{read_only:false; response_revision:670; number_of_response:1; }","duration":"100.055784ms","start":"2025-11-21T14:42:33.056974Z","end":"2025-11-21T14:42:33.157030Z","steps":["trace[980987117] 'process raft request'  (duration: 99.932648ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-21T14:42:33.538301Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.095518ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-11-21T14:42:33.538859Z","caller":"traceutil/trace.go:172","msg":"trace[1178991858] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:672; }","duration":"127.570213ms","start":"2025-11-21T14:42:33.411174Z","end":"2025-11-21T14:42:33.538744Z","steps":["trace[1178991858] 'range keys from in-memory index tree'  (duration: 127.01525ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T14:42:33.805094Z","caller":"traceutil/trace.go:172","msg":"trace[2052278630] transaction","detail":"{read_only:false; response_revision:674; number_of_response:1; }","duration":"207.149457ms","start":"2025-11-21T14:42:33.597923Z","end":"2025-11-21T14:42:33.805072Z","steps":["trace[2052278630] 'process raft request'  (duration: 206.477227ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T14:42:47.373662Z","caller":"traceutil/trace.go:172","msg":"trace[995251664] transaction","detail":"{read_only:false; response_revision:681; number_of_response:1; }","duration":"130.3345ms","start":"2025-11-21T14:42:47.243308Z","end":"2025-11-21T14:42:47.373643Z","steps":["trace[995251664] 'process raft request'  (duration: 130.184499ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:42:50 up  1:25,  0 user,  load average: 6.56, 4.07, 2.44
	Linux default-k8s-diff-port-859276 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
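
The etcd warnings two sections up ("apply request took too long", 100-200ms raft traces) fit the picture here: a load average of 6.56 on an 8-CPU host while several clusters build in parallel, so the slowness looks like contention rather than a cluster fault. If disk were suspected instead, etcd ships a self-check; the endpoint and cert paths below assume minikube's default layout and the healthcheck-client cert generated earlier in this run:

	# etcd's built-in performance check (endpoint and cert paths assumed):
	ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
	  --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
	  check perf
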
	
	
	==> kindnet [8b13c1531e7f2a6ba3e16cf97a2bc956ef7e797fdaf90f7a395984f6dcd0c7ab] <==
	I1121 14:41:53.116876       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 14:41:53.117073       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1121 14:41:53.117230       1 main.go:148] setting mtu 1500 for CNI 
	I1121 14:41:53.117246       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 14:41:53.117268       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T14:41:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 14:41:53.319141       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 14:41:53.319161       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 14:41:53.319169       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 14:41:53.409237       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 14:42:23.319386       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 14:42:23.319392       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 14:42:23.319474       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 14:42:23.319493       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1121 14:42:24.519440       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 14:42:24.519470       1 metrics.go:72] Registering metrics
	I1121 14:42:24.519525       1 controller.go:711] "Syncing nftables rules"
	I1121 14:42:33.318946       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:42:33.319025       1 main.go:301] handling current node
	I1121 14:42:43.319648       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1121 14:42:43.319682       1 main.go:301] handling current node
	
	
	==> kube-apiserver [46730d3f8950dfc39160a83403cde77d49652dbc3d617ab6dc02db67defa9031] <==
	I1121 14:41:52.292387       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1121 14:41:52.292404       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1121 14:41:52.292498       1 aggregator.go:171] initial CRD sync complete...
	I1121 14:41:52.292514       1 autoregister_controller.go:144] Starting autoregister controller
	I1121 14:41:52.292522       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1121 14:41:52.292530       1 cache.go:39] Caches are synced for autoregister controller
	I1121 14:41:52.295232       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1121 14:41:52.295492       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 14:41:52.295725       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1121 14:41:52.295800       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1121 14:41:52.295840       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1121 14:41:52.299081       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1121 14:41:52.322514       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 14:41:52.644554       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 14:41:52.698740       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 14:41:52.724468       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 14:41:52.734247       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 14:41:52.741798       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 14:41:52.774541       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.216.101"}
	I1121 14:41:52.788957       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.122.156"}
	I1121 14:41:53.195681       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 14:41:55.866883       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 14:41:55.916937       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 14:41:56.117813       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 14:41:56.117813       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [bec7da40a6663ecc1f4b7e19ba64af8747dad2862c844d662dec96aedce65617] <==
	I1121 14:41:55.666095       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 14:41:55.713350       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 14:41:55.713374       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1121 14:41:55.714578       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 14:41:55.714597       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 14:41:55.714607       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 14:41:55.714623       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 14:41:55.714583       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 14:41:55.714795       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 14:41:55.714817       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 14:41:55.715832       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 14:41:55.715854       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 14:41:55.715906       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 14:41:55.718053       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 14:41:55.719228       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:41:55.719256       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 14:41:55.721512       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 14:41:55.721574       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 14:41:55.721606       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 14:41:55.723910       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1121 14:41:55.729201       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 14:41:55.729217       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 14:41:55.729224       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1121 14:41:55.731415       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 14:41:55.732816       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f246e7bcac61f24e7f830a241a3325cbae840e2fce953f88ae1973715ca45fe4] <==
	I1121 14:41:53.024301       1 server_linux.go:53] "Using iptables proxy"
	I1121 14:41:53.095903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 14:41:53.196442       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 14:41:53.196499       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1121 14:41:53.196644       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 14:41:53.218163       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 14:41:53.218216       1 server_linux.go:132] "Using iptables Proxier"
	I1121 14:41:53.224325       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 14:41:53.224836       1 server.go:527] "Version info" version="v1.34.1"
	I1121 14:41:53.224906       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:41:53.226476       1 config.go:200] "Starting service config controller"
	I1121 14:41:53.226498       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 14:41:53.226596       1 config.go:309] "Starting node config controller"
	I1121 14:41:53.226617       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 14:41:53.226624       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 14:41:53.226638       1 config.go:106] "Starting endpoint slice config controller"
	I1121 14:41:53.226643       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 14:41:53.226723       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 14:41:53.226778       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 14:41:53.327485       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 14:41:53.327477       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 14:41:53.327537       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [978c3213baf15d70be19ba61ca307b752fef7c010cd37640b6d37cbac117cab0] <==
	I1121 14:41:50.671070       1 serving.go:386] Generated self-signed cert in-memory
	W1121 14:41:52.220161       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1121 14:41:52.220960       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1121 14:41:52.220981       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1121 14:41:52.220993       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1121 14:41:52.275284       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 14:41:52.275328       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 14:41:52.285792       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:41:52.286137       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 14:41:52.287992       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 14:41:52.288411       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 14:41:52.387498       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 14:41:56 default-k8s-diff-port-859276 kubelet[716]: I1121 14:41:56.289916     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9hnw\" (UniqueName: \"kubernetes.io/projected/af77ab56-321e-4191-833a-c34cd80fb085-kube-api-access-f9hnw\") pod \"dashboard-metrics-scraper-6ffb444bf9-2xhmq\" (UID: \"af77ab56-321e-4191-833a-c34cd80fb085\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq"
	Nov 21 14:41:56 default-k8s-diff-port-859276 kubelet[716]: I1121 14:41:56.289961     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3ac90d1e-9e41-4140-9b48-08e97b11f9e7-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-j7rcv\" (UID: \"3ac90d1e-9e41-4140-9b48-08e97b11f9e7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j7rcv"
	Nov 21 14:41:56 default-k8s-diff-port-859276 kubelet[716]: I1121 14:41:56.289979     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsx9m\" (UniqueName: \"kubernetes.io/projected/3ac90d1e-9e41-4140-9b48-08e97b11f9e7-kube-api-access-dsx9m\") pod \"kubernetes-dashboard-855c9754f9-j7rcv\" (UID: \"3ac90d1e-9e41-4140-9b48-08e97b11f9e7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j7rcv"
	Nov 21 14:41:56 default-k8s-diff-port-859276 kubelet[716]: I1121 14:41:56.290001     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/af77ab56-321e-4191-833a-c34cd80fb085-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-2xhmq\" (UID: \"af77ab56-321e-4191-833a-c34cd80fb085\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq"
	Nov 21 14:41:58 default-k8s-diff-port-859276 kubelet[716]: I1121 14:41:58.858189     716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 21 14:42:02 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:02.774462     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j7rcv" podStartSLOduration=3.233913839 podStartE2EDuration="6.774439437s" podCreationTimestamp="2025-11-21 14:41:56 +0000 UTC" firstStartedPulling="2025-11-21 14:41:56.506663859 +0000 UTC m=+7.012561300" lastFinishedPulling="2025-11-21 14:42:00.047189453 +0000 UTC m=+10.553086898" observedRunningTime="2025-11-21 14:42:00.701448604 +0000 UTC m=+11.207346067" watchObservedRunningTime="2025-11-21 14:42:02.774439437 +0000 UTC m=+13.280336902"
	Nov 21 14:42:03 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:03.695601     716 scope.go:117] "RemoveContainer" containerID="1b7aa4e217f55b8d6142f450b9bd029763591327ff3d253ab12f40a85a502fe5"
	Nov 21 14:42:04 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:04.700872     716 scope.go:117] "RemoveContainer" containerID="1b7aa4e217f55b8d6142f450b9bd029763591327ff3d253ab12f40a85a502fe5"
	Nov 21 14:42:04 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:04.701042     716 scope.go:117] "RemoveContainer" containerID="c3f566419ca7cc288119280617c9dbea9b26a92ebc90b789a652099c9f91f727"
	Nov 21 14:42:04 default-k8s-diff-port-859276 kubelet[716]: E1121 14:42:04.701248     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2xhmq_kubernetes-dashboard(af77ab56-321e-4191-833a-c34cd80fb085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq" podUID="af77ab56-321e-4191-833a-c34cd80fb085"
	Nov 21 14:42:05 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:05.705674     716 scope.go:117] "RemoveContainer" containerID="c3f566419ca7cc288119280617c9dbea9b26a92ebc90b789a652099c9f91f727"
	Nov 21 14:42:05 default-k8s-diff-port-859276 kubelet[716]: E1121 14:42:05.705877     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2xhmq_kubernetes-dashboard(af77ab56-321e-4191-833a-c34cd80fb085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq" podUID="af77ab56-321e-4191-833a-c34cd80fb085"
	Nov 21 14:42:13 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:13.590636     716 scope.go:117] "RemoveContainer" containerID="c3f566419ca7cc288119280617c9dbea9b26a92ebc90b789a652099c9f91f727"
	Nov 21 14:42:13 default-k8s-diff-port-859276 kubelet[716]: E1121 14:42:13.590929     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2xhmq_kubernetes-dashboard(af77ab56-321e-4191-833a-c34cd80fb085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq" podUID="af77ab56-321e-4191-833a-c34cd80fb085"
	Nov 21 14:42:23 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:23.749843     716 scope.go:117] "RemoveContainer" containerID="7bc1381c40424bf5c5ec170863470ce4dfb22c650dc079426153a476b7dc54fb"
	Nov 21 14:42:26 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:26.601066     716 scope.go:117] "RemoveContainer" containerID="c3f566419ca7cc288119280617c9dbea9b26a92ebc90b789a652099c9f91f727"
	Nov 21 14:42:26 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:26.762639     716 scope.go:117] "RemoveContainer" containerID="c3f566419ca7cc288119280617c9dbea9b26a92ebc90b789a652099c9f91f727"
	Nov 21 14:42:26 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:26.762881     716 scope.go:117] "RemoveContainer" containerID="792ec4dce17b4ab7790240101a6b580c76469012712ad8190898f09de8430e58"
	Nov 21 14:42:26 default-k8s-diff-port-859276 kubelet[716]: E1121 14:42:26.763041     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2xhmq_kubernetes-dashboard(af77ab56-321e-4191-833a-c34cd80fb085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq" podUID="af77ab56-321e-4191-833a-c34cd80fb085"
	Nov 21 14:42:33 default-k8s-diff-port-859276 kubelet[716]: I1121 14:42:33.589865     716 scope.go:117] "RemoveContainer" containerID="792ec4dce17b4ab7790240101a6b580c76469012712ad8190898f09de8430e58"
	Nov 21 14:42:33 default-k8s-diff-port-859276 kubelet[716]: E1121 14:42:33.590061     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2xhmq_kubernetes-dashboard(af77ab56-321e-4191-833a-c34cd80fb085)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2xhmq" podUID="af77ab56-321e-4191-833a-c34cd80fb085"
	Nov 21 14:42:42 default-k8s-diff-port-859276 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 21 14:42:42 default-k8s-diff-port-859276 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 21 14:42:42 default-k8s-diff-port-859276 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 21 14:42:42 default-k8s-diff-port-859276 systemd[1]: kubelet.service: Consumed 1.654s CPU time.
	
	
	==> kubernetes-dashboard [b4239d7c08405352938813c62cb6f15113973121a11d38199bd1bf0c93a47049] <==
	2025/11/21 14:42:00 Using namespace: kubernetes-dashboard
	2025/11/21 14:42:00 Using in-cluster config to connect to apiserver
	2025/11/21 14:42:00 Using secret token for csrf signing
	2025/11/21 14:42:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 14:42:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 14:42:00 Successful initial request to the apiserver, version: v1.34.1
	2025/11/21 14:42:00 Generating JWE encryption key
	2025/11/21 14:42:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 14:42:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 14:42:00 Initializing JWE encryption key from synchronized object
	2025/11/21 14:42:00 Creating in-cluster Sidecar client
	2025/11/21 14:42:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 14:42:00 Serving insecurely on HTTP port: 9090
	2025/11/21 14:42:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 14:42:00 Starting overwatch
	
	
	==> storage-provisioner [7bc1381c40424bf5c5ec170863470ce4dfb22c650dc079426153a476b7dc54fb] <==
	I1121 14:41:52.983129       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1121 14:42:22.987961       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d52774abd97b7c8a03b60d341da70eead219e69b9d2e38977c51930994e08e91] <==
	I1121 14:42:23.810124       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 14:42:23.817507       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 14:42:23.817623       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 14:42:23.819698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:27.275675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:31.536195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:35.135249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:38.189768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:41.212658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:41.216509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:42:41.216678       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 14:42:41.216805       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"527dbf43-5986-4266-9001-2722967aec7b", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-859276_3843f557-a18a-46d3-9585-b75033559ed1 became leader
	I1121 14:42:41.216821       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-859276_3843f557-a18a-46d3-9585-b75033559ed1!
	W1121 14:42:41.218391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:41.220984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 14:42:41.317013       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-859276_3843f557-a18a-46d3-9585-b75033559ed1!
	W1121 14:42:43.224917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:43.229918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:45.233961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:45.238077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:47.241314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:47.374730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:49.378772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 14:42:49.383338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-859276 -n default-k8s-diff-port-859276
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-859276 -n default-k8s-diff-port-859276: exit status 2 (322.69557ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-859276 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.64s)
E1121 14:42:58.549227   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:43:08.790993   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
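
For reference, the failing Pause check above can be replayed by hand. A minimal sketch, assuming the default-k8s-diff-port-859276 profile from this run is still up; the status and kubectl invocations are copied from the post-mortem helpers, and `pause` is the minikube subcommand under test:

    # Pause the profile, then query only the apiserver state.
    out/minikube-linux-amd64 pause -p default-k8s-diff-port-859276
    out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-859276
    # This run printed "Running" with exit status 2; a successful pause would be
    # expected to report a paused apiserver here instead.
    # As the post-mortem does, list any pods not in the Running phase:
    kubectl --context default-k8s-diff-port-859276 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running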

                                                
                                    

Test pass (263/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.75
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.2
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 3.94
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.2
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.38
21 TestBinaryMirror 0.79
22 TestOffline 51.01
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 123.22
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.4
48 TestAddons/StoppedEnableDisable 16.57
49 TestCertOptions 30.64
50 TestCertExpiration 220.31
52 TestForceSystemdFlag 29.05
53 TestForceSystemdEnv 28.57
58 TestErrorSpam/setup 20.9
59 TestErrorSpam/start 0.62
60 TestErrorSpam/status 0.9
61 TestErrorSpam/pause 6.1
62 TestErrorSpam/unpause 5.23
63 TestErrorSpam/stop 12.51
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 67.78
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.92
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.18
75 TestFunctional/serial/CacheCmd/cache/add_local 1.12
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.47
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 66.8
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.1
86 TestFunctional/serial/LogsFileCmd 1.11
87 TestFunctional/serial/InvalidService 4.22
89 TestFunctional/parallel/ConfigCmd 0.41
90 TestFunctional/parallel/DashboardCmd 8.16
91 TestFunctional/parallel/DryRun 0.36
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 0.91
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 24.62
101 TestFunctional/parallel/SSHCmd 0.6
102 TestFunctional/parallel/CpCmd 1.73
103 TestFunctional/parallel/MySQL 19.52
104 TestFunctional/parallel/FileSync 0.26
105 TestFunctional/parallel/CertSync 1.55
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
113 TestFunctional/parallel/License 0.47
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 0.56
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.96
120 TestFunctional/parallel/ImageCommands/ImageBuild 2.04
121 TestFunctional/parallel/ImageCommands/Setup 1.06
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
125 TestFunctional/parallel/ProfileCmd/profile_list 0.41
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.2
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/MountCmd/any-port 5.78
145 TestFunctional/parallel/MountCmd/specific-port 1.87
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.75
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
150 TestFunctional/parallel/ServiceCmd/List 1.68
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.68
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.01
162 TestMultiControlPlane/serial/StartCluster 116.96
163 TestMultiControlPlane/serial/DeployApp 4.02
164 TestMultiControlPlane/serial/PingHostFromPods 0.97
165 TestMultiControlPlane/serial/AddWorkerNode 24.12
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.83
168 TestMultiControlPlane/serial/CopyFile 16.24
169 TestMultiControlPlane/serial/StopSecondaryNode 13.16
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.74
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.83
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 106.86
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.42
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
176 TestMultiControlPlane/serial/StopCluster 48.44
177 TestMultiControlPlane/serial/RestartCluster 56.02
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
179 TestMultiControlPlane/serial/AddSecondaryNode 42.79
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
185 TestJSONOutput/start/Command 37.13
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.03
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 26.92
211 TestKicCustomNetwork/use_default_bridge_network 22.68
212 TestKicExistingNetwork 23.66
213 TestKicCustomSubnet 23.31
214 TestKicStaticIP 24.06
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 45.78
219 TestMountStart/serial/StartWithMountFirst 4.71
220 TestMountStart/serial/VerifyMountFirst 0.25
221 TestMountStart/serial/StartWithMountSecond 7.46
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.63
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.33
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 90.78
231 TestMultiNode/serial/DeployApp2Nodes 3.33
232 TestMultiNode/serial/PingHostFrom2Pods 0.66
233 TestMultiNode/serial/AddNode 25.83
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.62
236 TestMultiNode/serial/CopyFile 9.25
237 TestMultiNode/serial/StopNode 2.17
238 TestMultiNode/serial/StartAfterStop 7.04
239 TestMultiNode/serial/RestartKeepsNodes 55.88
240 TestMultiNode/serial/DeleteNode 4.93
241 TestMultiNode/serial/StopMultiNode 17.54
242 TestMultiNode/serial/RestartMultiNode 26.13
243 TestMultiNode/serial/ValidateNameConflict 22.82
248 TestPreload 81.64
250 TestScheduledStopUnix 98.95
253 TestInsufficientStorage 12.44
254 TestRunningBinaryUpgrade 109
256 TestKubernetesUpgrade 300.66
257 TestMissingContainerUpgrade 126.23
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
260 TestNoKubernetes/serial/StartWithK8s 29.53
261 TestNoKubernetes/serial/StartWithStopK8s 28.34
262 TestNoKubernetes/serial/Start 10.16
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
265 TestNoKubernetes/serial/ProfileList 1.69
266 TestNoKubernetes/serial/Stop 1.29
267 TestNoKubernetes/serial/StartNoArgs 6.83
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
269 TestStoppedBinaryUpgrade/Setup 0.68
270 TestStoppedBinaryUpgrade/Upgrade 40.63
278 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
286 TestNetworkPlugins/group/false 5.11
291 TestPause/serial/Start 43.55
293 TestStartStop/group/old-k8s-version/serial/FirstStart 51.04
294 TestPause/serial/SecondStartNoReconfiguration 5.98
297 TestStartStop/group/no-preload/serial/FirstStart 48.44
298 TestStartStop/group/old-k8s-version/serial/DeployApp 8.26
300 TestStartStop/group/old-k8s-version/serial/Stop 16.4
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
302 TestStartStop/group/old-k8s-version/serial/SecondStart 27.66
303 TestStartStop/group/no-preload/serial/DeployApp 8.22
305 TestStartStop/group/no-preload/serial/Stop 16.39
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
311 TestStartStop/group/no-preload/serial/SecondStart 47.46
313 TestStartStop/group/embed-certs/serial/FirstStart 42.32
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/embed-certs/serial/DeployApp 8.22
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
320 TestStartStop/group/embed-certs/serial/Stop 17.07
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.86
324 TestStartStop/group/newest-cni/serial/FirstStart 28.93
325 TestNetworkPlugins/group/auto/Start 46.69
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
327 TestStartStop/group/embed-certs/serial/SecondStart 27.95
328 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/Stop 2.99
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
332 TestStartStop/group/newest-cni/serial/SecondStart 10.58
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
339 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
341 TestNetworkPlugins/group/auto/KubeletFlags 0.32
342 TestNetworkPlugins/group/auto/NetCatPod 8.22
343 TestNetworkPlugins/group/kindnet/Start 42.67
344 TestNetworkPlugins/group/calico/Start 52.12
345 TestNetworkPlugins/group/auto/DNS 0.11
346 TestNetworkPlugins/group/auto/Localhost 0.09
347 TestNetworkPlugins/group/auto/HairPin 0.09
348 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.3
350 TestNetworkPlugins/group/custom-flannel/Start 49.21
351 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.3
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
354 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.43
355 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
356 TestNetworkPlugins/group/kindnet/NetCatPod 9.18
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/DNS 0.1
359 TestNetworkPlugins/group/kindnet/Localhost 0.09
360 TestNetworkPlugins/group/kindnet/HairPin 0.09
361 TestNetworkPlugins/group/calico/KubeletFlags 0.33
362 TestNetworkPlugins/group/calico/NetCatPod 9.28
363 TestNetworkPlugins/group/calico/DNS 0.11
364 TestNetworkPlugins/group/calico/Localhost 0.09
365 TestNetworkPlugins/group/calico/HairPin 0.09
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.19
368 TestNetworkPlugins/group/enable-default-cni/Start 67.45
369 TestNetworkPlugins/group/custom-flannel/DNS 0.31
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
372 TestNetworkPlugins/group/flannel/Start 50.98
373 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
374 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
375 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
376 TestNetworkPlugins/group/bridge/Start 60.99
378 TestNetworkPlugins/group/flannel/ControllerPod 6.01
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.16
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
382 TestNetworkPlugins/group/flannel/NetCatPod 9.21
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.1
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.08
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.08
386 TestNetworkPlugins/group/flannel/DNS 0.11
387 TestNetworkPlugins/group/flannel/Localhost 0.08
388 TestNetworkPlugins/group/flannel/HairPin 0.08
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
390 TestNetworkPlugins/group/bridge/NetCatPod 9.22
391 TestNetworkPlugins/group/bridge/DNS 0.16
392 TestNetworkPlugins/group/bridge/Localhost 0.09
393 TestNetworkPlugins/group/bridge/HairPin 0.09
TestDownloadOnly/v1.28.0/json-events (4.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-899209 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-899209 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.753714458s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.75s)
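
The json-events check consumes minikube's machine-readable output: with -o=json, each line of stdout is a single JSON event. A hedged sketch of inspecting that stream (jq is assumed to be available, and the .type field name is an assumption about the current event schema, which is modeled on CloudEvents):

    # Re-run the same download-only start and print just the event types;
    # --alsologtostderr keeps the human-readable logs on stderr, so stdout
    # stays pure JSON for jq.
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-899209 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker | jq -r .type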

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1121 13:55:42.774087   14542 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1121 13:55:42.774171   14542 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
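
The preload-exists step only asserts that the tarball is already on disk, so the same check can be run directly against the path logged above (the MINIKUBE_HOME prefix is specific to this run):

    ls -lh /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4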

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-899209
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-899209: exit status 85 (66.245782ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-899209 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-899209 │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 13:55:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 13:55:38.068951   14554 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:55:38.069193   14554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:55:38.069202   14554 out.go:374] Setting ErrFile to fd 2...
	I1121 13:55:38.069206   14554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:55:38.069391   14554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	W1121 13:55:38.069535   14554 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21847-11045/.minikube/config/config.json: open /home/jenkins/minikube-integration/21847-11045/.minikube/config/config.json: no such file or directory
	I1121 13:55:38.069998   14554 out.go:368] Setting JSON to true
	I1121 13:55:38.070890   14554 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2287,"bootTime":1763731051,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 13:55:38.070967   14554 start.go:143] virtualization: kvm guest
	I1121 13:55:38.072951   14554 out.go:99] [download-only-899209] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1121 13:55:38.073087   14554 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball: no such file or directory
	I1121 13:55:38.073127   14554 notify.go:221] Checking for updates...
	I1121 13:55:38.074450   14554 out.go:171] MINIKUBE_LOCATION=21847
	I1121 13:55:38.075865   14554 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 13:55:38.077164   14554 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 13:55:38.078233   14554 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 13:55:38.079406   14554 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1121 13:55:38.081465   14554 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 13:55:38.081716   14554 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 13:55:38.105459   14554 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 13:55:38.105577   14554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:55:38.486527   14554 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-21 13:55:38.477057538 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 13:55:38.486651   14554 docker.go:319] overlay module found
	I1121 13:55:38.488235   14554 out.go:99] Using the docker driver based on user configuration
	I1121 13:55:38.488262   14554 start.go:309] selected driver: docker
	I1121 13:55:38.488273   14554 start.go:930] validating driver "docker" against <nil>
	I1121 13:55:38.488349   14554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:55:38.544797   14554 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-21 13:55:38.535879351 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 13:55:38.544936   14554 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 13:55:38.545399   14554 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1121 13:55:38.545555   14554 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 13:55:38.547126   14554 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-899209 host does not exist
	  To start a cluster, run: "minikube start -p download-only-899209"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
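
The non-zero exit is tolerated here: the profile was created with --download-only, so there is no node host for `logs` to collect from, as the trailing message in the output notes. A quick sketch of the same behavior, assuming the profile still exists at that point:

    out/minikube-linux-amd64 logs -p download-only-899209
    echo $?   # 85 in this run: the control-plane node host was never created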

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-899209
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)
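
As a hedged follow-up (profile list is a standard minikube subcommand, though its output format varies by version), the deletion can be confirmed by listing the remaining profiles:

    out/minikube-linux-amd64 profile list
    # download-only-899209 should no longer appear after the delete above.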

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-145200 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-145200 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.937589538s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.94s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1121 13:55:47.114062   14542 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1121 13:55:47.114089   14542 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-145200
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-145200: exit status 85 (67.559902ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-899209 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-899209 │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │ 21 Nov 25 13:55 UTC │
	│ delete  │ -p download-only-899209                                                                                                                                                   │ download-only-899209 │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │ 21 Nov 25 13:55 UTC │
	│ start   │ -o=json --download-only -p download-only-145200 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-145200 │ jenkins │ v1.37.0 │ 21 Nov 25 13:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 13:55:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 13:55:43.225587   14911 out.go:360] Setting OutFile to fd 1 ...
	I1121 13:55:43.226229   14911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:55:43.226239   14911 out.go:374] Setting ErrFile to fd 2...
	I1121 13:55:43.226260   14911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 13:55:43.226448   14911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 13:55:43.226900   14911 out.go:368] Setting JSON to true
	I1121 13:55:43.227674   14911 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2292,"bootTime":1763731051,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 13:55:43.227748   14911 start.go:143] virtualization: kvm guest
	I1121 13:55:43.229296   14911 out.go:99] [download-only-145200] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 13:55:43.229444   14911 notify.go:221] Checking for updates...
	I1121 13:55:43.230370   14911 out.go:171] MINIKUBE_LOCATION=21847
	I1121 13:55:43.231403   14911 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 13:55:43.232475   14911 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 13:55:43.233388   14911 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 13:55:43.234388   14911 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1121 13:55:43.236094   14911 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 13:55:43.236300   14911 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 13:55:43.257752   14911 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 13:55:43.257858   14911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:55:43.311924   14911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-21 13:55:43.303043852 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 13:55:43.312047   14911 docker.go:319] overlay module found
	I1121 13:55:43.313665   14911 out.go:99] Using the docker driver based on user configuration
	I1121 13:55:43.313697   14911 start.go:309] selected driver: docker
	I1121 13:55:43.313708   14911 start.go:930] validating driver "docker" against <nil>
	I1121 13:55:43.313803   14911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 13:55:43.369752   14911 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-21 13:55:43.361088722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 13:55:43.369937   14911 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 13:55:43.370427   14911 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1121 13:55:43.370614   14911 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 13:55:43.372497   14911 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-145200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-145200"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.20s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-145200
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.38s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-032070 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-032070" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-032070
--- PASS: TestDownloadOnlyKic (0.38s)

TestBinaryMirror (0.79s)

=== RUN   TestBinaryMirror
I1121 13:55:48.163750   14542 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-248688 --alsologtostderr --binary-mirror http://127.0.0.1:45793 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-248688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-248688
--- PASS: TestBinaryMirror (0.79s)

TestOffline (51.01s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-925222 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-925222 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (45.145309544s)
helpers_test.go:175: Cleaning up "offline-crio-925222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-925222
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-925222: (5.865708423s)
--- PASS: TestOffline (51.01s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-243127
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-243127: exit status 85 (59.711123ms)

-- stdout --
	* Profile "addons-243127" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-243127"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-243127
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-243127: exit status 85 (58.645717ms)

-- stdout --
	* Profile "addons-243127" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-243127"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (123.22s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-243127 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-243127 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m3.224568246s)
--- PASS: TestAddons/Setup (123.22s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-243127 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-243127 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.4s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-243127 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-243127 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5ae2154d-5830-41e7-a8ff-ead5aef66f5c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5ae2154d-5830-41e7-a8ff-ead5aef66f5c] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003536706s
addons_test.go:694: (dbg) Run:  kubectl --context addons-243127 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-243127 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-243127 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.40s)

TestAddons/StoppedEnableDisable (16.57s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-243127
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-243127: (16.307009554s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-243127
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-243127
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-243127
--- PASS: TestAddons/StoppedEnableDisable (16.57s)

TestCertOptions (30.64s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-116734 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-116734 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (27.49521037s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-116734 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-116734 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-116734 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-116734" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-116734
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-116734: (2.463677757s)
--- PASS: TestCertOptions (30.64s)

TestCertExpiration (220.31s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-046125 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-046125 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (30.559596968s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-046125 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-046125 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.996168482s)
helpers_test.go:175: Cleaning up "cert-expiration-046125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-046125
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-046125: (2.753008882s)
--- PASS: TestCertExpiration (220.31s)
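
The two start invocations above are the whole mechanism of this test: the first issues certificates that expire in 3 minutes, and the restart after the expiry window forces minikube to regenerate them. A minimal sketch of the same round trip, using the flags from this run (the profile name cert-expiration-demo is hypothetical, not from the log):

	# Start with deliberately short-lived certificates.
	out/minikube-linux-amd64 start -p cert-expiration-demo --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=crio
	sleep 180   # wait out the 3m expiry window
	# Restarting with a long expiry regenerates the now-expired certificates.
	out/minikube-linux-amd64 start -p cert-expiration-demo --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 delete -p cert-expiration-demo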

TestForceSystemdFlag (29.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-085432 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-085432 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.325801365s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-085432 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-085432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-085432
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-085432: (2.417422244s)
--- PASS: TestForceSystemdFlag (29.05s)
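
This test passes --force-systemd and then reads the generated crio drop-in to confirm the cgroup manager; TestForceSystemdEnv below drives the same behavior through the MINIKUBE_FORCE_SYSTEMD environment variable (which also appears in the DryRun env dump later in this report). A sketch of both paths, assuming a scratch profile (force-systemd-demo is hypothetical):

	# Flag form, as exercised above; then inspect the generated crio drop-in.
	out/minikube-linux-amd64 start -p force-systemd-demo --memory=3072 --force-systemd --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p force-systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
	out/minikube-linux-amd64 delete -p force-systemd-demo
	# Env form, as exercised by TestForceSystemdEnv below.
	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-demo --memory=3072 --driver=docker --container-runtime=crio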

TestForceSystemdEnv (28.57s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-653926 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-653926 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.156265652s)
helpers_test.go:175: Cleaning up "force-systemd-env-653926" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-653926
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-653926: (2.409584358s)
--- PASS: TestForceSystemdEnv (28.57s)

TestErrorSpam/setup (20.9s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-019071 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-019071 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-019071 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-019071 --driver=docker  --container-runtime=crio: (20.89554679s)
--- PASS: TestErrorSpam/setup (20.90s)

TestErrorSpam/start (0.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.9s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 status
--- PASS: TestErrorSpam/status (0.90s)

TestErrorSpam/pause (6.1s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 pause: exit status 80 (2.091001595s)

-- stdout --
	* Pausing node nospam-019071 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:01:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 pause: exit status 80 (2.095932074s)

-- stdout --
	* Pausing node nospam-019071 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:01:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 pause: exit status 80 (1.909798012s)

-- stdout --
	* Pausing node nospam-019071 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:01:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.10s)
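
All three pause attempts exit 80 with GUEST_PAUSE for the same reason: the underlying `sudo runc list -f json` fails because /run/runc does not exist inside the node. A minimal reproduction sketch, assuming the nospam-019071 node from this run is still up:

	# Re-run the exact check that `minikube pause` performs on the node.
	out/minikube-linux-amd64 -p nospam-019071 ssh "sudo runc list -f json"
	# Listing /run shows whether the runc state directory exists at all.
	out/minikube-linux-amd64 -p nospam-019071 ssh "sudo ls /run"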

TestErrorSpam/unpause (5.23s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 unpause: exit status 80 (1.78324157s)

-- stdout --
	* Unpausing node nospam-019071 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:01:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 unpause: exit status 80 (1.733341799s)

-- stdout --
	* Unpausing node nospam-019071 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:01:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 unpause: exit status 80 (1.716253707s)

-- stdout --
	* Unpausing node nospam-019071 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T14:01:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.23s)

TestErrorSpam/stop (12.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 stop: (12.313493304s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-019071 --log_dir /tmp/nospam-019071 stop
--- PASS: TestErrorSpam/stop (12.51s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21847-11045/.minikube/files/etc/test/nested/copy/14542/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (67.78s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-179014 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1121 14:02:52.778616   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:02:52.792294   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:02:52.803754   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:02:52.825062   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:02:52.866345   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:02:52.947697   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:02:53.109138   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:02:53.430792   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:02:54.072771   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:02:55.354076   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-179014 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m7.780073794s)
--- PASS: TestFunctional/serial/StartWithProxy (67.78s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.92s)

=== RUN   TestFunctional/serial/SoftStart
I1121 14:02:56.841976   14542 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-179014 --alsologtostderr -v=8
E1121 14:02:57.916105   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-179014 --alsologtostderr -v=8: (5.919594649s)
functional_test.go:678: soft start took 5.920229474s for "functional-179014" cluster.
I1121 14:03:02.761885   14542 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (5.92s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-179014 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 cache add registry.k8s.io/pause:3.1
E1121 14:03:03.037858   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-179014 cache add registry.k8s.io/pause:3.1: (1.092344329s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-179014 cache add registry.k8s.io/pause:latest: (1.128388115s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.18s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-179014 /tmp/TestFunctionalserialCacheCmdcacheadd_local362017292/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 cache add minikube-local-cache-test:functional-179014
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 cache delete minikube-local-cache-test:functional-179014
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-179014
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179014 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (268.667981ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)
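
The reload test is a four-step round trip: remove the cached image from the node, confirm it is gone, run `cache reload`, and confirm it is back. The same sequence by hand, using the profile and image from this run:

	out/minikube-linux-amd64 -p functional-179014 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-179014 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image was removed
	out/minikube-linux-amd64 -p functional-179014 cache reload
	out/minikube-linux-amd64 -p functional-179014 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after the reload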

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 kubectl -- --context functional-179014 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-179014 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (66.8s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-179014 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1121 14:03:13.280090   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:03:33.761602   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:04:14.724755   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-179014 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m6.801405011s)
functional_test.go:776: restart took 1m6.801506809s for "functional-179014" cluster.
I1121 14:04:16.178719   14542 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (66.80s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-179014 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.1s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-179014 logs: (1.095899206s)
--- PASS: TestFunctional/serial/LogsCmd (1.10s)

TestFunctional/serial/LogsFileCmd (1.11s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 logs --file /tmp/TestFunctionalserialLogsFileCmd758063151/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-179014 logs --file /tmp/TestFunctionalserialLogsFileCmd758063151/001/logs.txt: (1.112661947s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.11s)

TestFunctional/serial/InvalidService (4.22s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-179014 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-179014
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-179014: exit status 115 (321.691645ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31616 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-179014 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.22s)
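
The Service in testdata/invalidsvc.yaml matches no running pod, so `minikube service` prints the NodePort table but then exits 115 with SVC_UNREACHABLE. A sketch of the same check; the endpoints lookup is an illustrative addition, not part of the test:

	kubectl --context functional-179014 apply -f testdata/invalidsvc.yaml
	kubectl --context functional-179014 get endpoints invalid-svc   # no addresses behind the service
	out/minikube-linux-amd64 service invalid-svc -p functional-179014; echo "exit=$?"   # expect exit=115
	kubectl --context functional-179014 delete -f testdata/invalidsvc.yaml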

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179014 config get cpus: exit status 14 (80.067633ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179014 config get cpus: exit status 14 (68.630827ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
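
`config get` on an unset key exits 14 with "specified key could not be found in config", which is exactly what the unset/get/set/get/unset/get cycle above asserts. The same cycle by hand, using the profile from this run:

	out/minikube-linux-amd64 -p functional-179014 config get cpus     # exit 14: key not set
	out/minikube-linux-amd64 -p functional-179014 config set cpus 2
	out/minikube-linux-amd64 -p functional-179014 config get cpus     # prints the stored value
	out/minikube-linux-amd64 -p functional-179014 config unset cpus
	out/minikube-linux-amd64 -p functional-179014 config get cpus     # exit 14 again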

TestFunctional/parallel/DashboardCmd (8.16s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-179014 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-179014 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 52947: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.16s)

TestFunctional/parallel/DryRun (0.36s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-179014 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-179014 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (153.452496ms)

-- stdout --
	* [functional-179014] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1121 14:04:44.427956   52540 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:04:44.428192   52540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:04:44.428201   52540 out.go:374] Setting ErrFile to fd 2...
	I1121 14:04:44.428205   52540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:04:44.428371   52540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:04:44.428753   52540 out.go:368] Setting JSON to false
	I1121 14:04:44.429611   52540 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2833,"bootTime":1763731051,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:04:44.429693   52540 start.go:143] virtualization: kvm guest
	I1121 14:04:44.431094   52540 out.go:179] * [functional-179014] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:04:44.432167   52540 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:04:44.432171   52540 notify.go:221] Checking for updates...
	I1121 14:04:44.433239   52540 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:04:44.434611   52540 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:04:44.435828   52540 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:04:44.436845   52540 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:04:44.437861   52540 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:04:44.439162   52540 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:04:44.439683   52540 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:04:44.462200   52540 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:04:44.462288   52540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:04:44.520502   52540 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-21 14:04:44.510452362 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:04:44.520655   52540 docker.go:319] overlay module found
	I1121 14:04:44.522157   52540 out.go:179] * Using the docker driver based on existing profile
	I1121 14:04:44.523089   52540 start.go:309] selected driver: docker
	I1121 14:04:44.523106   52540 start.go:930] validating driver "docker" against &{Name:functional-179014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-179014 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:04:44.523179   52540 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:04:44.524505   52540 out.go:203] 
	W1121 14:04:44.525422   52540 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1121 14:04:44.526330   52540 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-179014 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)
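
The dry run exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY because 250MiB is below minikube's 1800MB usable minimum; --dry-run only validates the requested config against the existing profile, so nothing is created. A sketch of checking that validation from Go, flags copied from the logged invocation:

-- example (sketch, not from this run) --
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-179014", "--dry-run", "--memory", "250MB",
		"--driver=docker", "--container-runtime=crio")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("config rejected, exit", exitErr.ExitCode()) // 23 in the run above
	}
}
-- /example --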

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-179014 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-179014 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (189.528999ms)

-- stdout --
	* [functional-179014] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1121 14:04:32.137244   48824 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:04:32.137360   48824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:04:32.137372   48824 out.go:374] Setting ErrFile to fd 2...
	I1121 14:04:32.137380   48824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:04:32.138039   48824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:04:32.138746   48824 out.go:368] Setting JSON to false
	I1121 14:04:32.140088   48824 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2821,"bootTime":1763731051,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:04:32.140173   48824 start.go:143] virtualization: kvm guest
	I1121 14:04:32.142742   48824 out.go:179] * [functional-179014] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1121 14:04:32.143857   48824 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:04:32.143927   48824 notify.go:221] Checking for updates...
	I1121 14:04:32.146475   48824 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:04:32.147909   48824 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:04:32.149107   48824 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:04:32.150265   48824 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:04:32.151453   48824 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:04:32.152793   48824 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:04:32.153529   48824 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:04:32.181244   48824 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:04:32.181348   48824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:04:32.245625   48824 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-21 14:04:32.235181818 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:04:32.245801   48824 docker.go:319] overlay module found
	I1121 14:04:32.247613   48824 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1121 14:04:32.248653   48824 start.go:309] selected driver: docker
	I1121 14:04:32.248671   48824 start.go:930] validating driver "docker" against &{Name:functional-179014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-179014 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 14:04:32.248783   48824 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:04:32.250509   48824 out.go:203] 
	W1121 14:04:32.251467   48824 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1121 14:04:32.252422   48824 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (0.91s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)
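
The second invocation exercises "status -f" with a Go template over the status struct; the fields used above are .Host, .Kubelet, .APIServer and .Kubeconfig ("kublet:" in the logged command is only the literal output label, not the template key). A sketch with the same format string:

-- example (sketch, not from this run) --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	format := "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-179014",
		"status", "-f", format).Output()
	// On a healthy cluster this prints something like:
	// host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured
	fmt.Println(string(out))
}
-- /example --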

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (24.62s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c4617cc2-57cc-4d0a-b417-3439f986ec61] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003071343s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-179014 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-179014 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-179014 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-179014 apply -f testdata/storage-provisioner/pod.yaml
I1121 14:04:29.633727   14542 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2a527c9c-856b-46fa-8863-5a8c2059ba6f] Pending
helpers_test.go:352: "sp-pod" [2a527c9c-856b-46fa-8863-5a8c2059ba6f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [2a527c9c-856b-46fa-8863-5a8c2059ba6f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004047992s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-179014 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-179014 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-179014 apply -f testdata/storage-provisioner/pod.yaml
I1121 14:04:40.829662   14542 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [dc40633a-d8d5-480a-b78b-a04deb751cda] Pending
helpers_test.go:352: "sp-pod" [dc40633a-d8d5-480a-b78b-a04deb751cda] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [dc40633a-d8d5-480a-b78b-a04deb751cda] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003072771s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-179014 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.62s)
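
The sequence above is a persistence check: write a file through the first sp-pod, delete the pod, recreate it from the same PVC-backed manifest, and confirm the file is still there. A sketch of that flow with names taken from this log (the wait for the new pod to reach Running is elided):

-- example (sketch, not from this run) --
package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the functional-179014 context.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-179014"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	out, _ := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Print(string(out)) // "foo": the volume outlived the pod
}
-- /example --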

TestFunctional/parallel/SSHCmd (0.6s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.60s)

TestFunctional/parallel/CpCmd (1.73s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh -n functional-179014 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 cp functional-179014:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2841785318/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh -n functional-179014 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh -n functional-179014 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.73s)

TestFunctional/parallel/MySQL (19.52s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-179014 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-8ftp8" [76927e62-76ad-4ca5-a794-c15c4ddcd3a7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2025/11/21 14:04:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "mysql-5bb876957f-8ftp8" [76927e62-76ad-4ca5-a794-c15c4ddcd3a7] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.044694047s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-179014 exec mysql-5bb876957f-8ftp8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-179014 exec mysql-5bb876957f-8ftp8 -- mysql -ppassword -e "show databases;": exit status 1 (81.207895ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1121 14:05:04.209721   14542 retry.go:31] will retry after 1.056432994s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-179014 exec mysql-5bb876957f-8ftp8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-179014 exec mysql-5bb876957f-8ftp8 -- mysql -ppassword -e "show databases;": exit status 1 (80.88183ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1121 14:05:05.347721   14542 retry.go:31] will retry after 1.999136707s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-179014 exec mysql-5bb876957f-8ftp8 -- mysql -ppassword -e "show databases;"
E1121 14:05:36.646796   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:07:52.778187   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:08:20.488281   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:12:52.777572   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (19.52s)
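
The two ERROR 2002 probes followed by "will retry after ..." lines show the backoff loop: mysqld inside the pod is still initializing its socket, so the probe is repeated with a growing delay until "show databases;" succeeds. A sketch of that loop with illustrative durations:

-- example (sketch, not from this run) --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	backoff := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		err := exec.Command("kubectl", "--context", "functional-179014",
			"exec", "mysql-5bb876957f-8ftp8", "--",
			"mysql", "-ppassword", "-e", "show databases;").Run()
		if err == nil {
			fmt.Println("mysql ready on attempt", attempt)
			return
		}
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2 // roughly the growth visible in the retry.go lines
	}
}
-- /example --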

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/14542/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "sudo cat /etc/test/nested/copy/14542/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

TestFunctional/parallel/CertSync (1.55s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/14542.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "sudo cat /etc/ssl/certs/14542.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/14542.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "sudo cat /usr/share/ca-certificates/14542.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/145422.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "sudo cat /etc/ssl/certs/145422.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/145422.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "sudo cat /usr/share/ca-certificates/145422.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.55s)
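
The two groups of paths are one certificate under several names: the uploaded .pem copies plus a hash-named link (51391683.0 and 3ec20f2e.0) in /etc/ssl/certs. Assuming those .0 names are the usual OpenSSL subject hashes, they can be reproduced with openssl; a sketch using the first cert path from this log:

-- example (sketch, not from this run) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// "openssl x509 -hash" prints the subject hash that names the
	// /etc/ssl/certs/<hash>.0 link checked above.
	out, err := exec.Command("openssl", "x509", "-noout", "-hash",
		"-in", "/etc/ssl/certs/14542.pem").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s.0\n", strings.TrimSpace(string(out)))
}
-- /example --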

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-179014 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179014 ssh "sudo systemctl is-active docker": exit status 1 (309.257414ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179014 ssh "sudo systemctl is-active containerd": exit status 1 (310.572144ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
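
Both probes print "inactive" and the remote shell exits 3, systemctl's code for an inactive unit, which minikube ssh surfaces as the exit status 1 seen above; only the runtime under test should report active. A sketch of the same check (the "crio" unit name is an assumption, not shown in this log):

-- example (sketch, not from this run) --
package main

import (
	"fmt"
	"os/exec"
)

// isActive asks systemd inside the node, via minikube ssh, whether a
// unit is running; "systemctl is-active" exits 0 only for active units.
func isActive(unit string) bool {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-179014",
		"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
	fmt.Printf("%s: %s", unit, out)
	return err == nil
}

func main() {
	isActive("docker")     // inactive in this job
	isActive("containerd") // inactive in this job
	isActive("crio")       // assumed unit name for the active runtime
}
-- /example --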

TestFunctional/parallel/License (0.47s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.47s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.56s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-179014 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ localhost/my-image                      │ functional-179014  │ 97df052ef0627 │ 1.47MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-179014 image ls --format table --alsologtostderr:
I1121 14:05:00.834386   55067 out.go:360] Setting OutFile to fd 1 ...
I1121 14:05:00.834651   55067 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:05:00.834660   55067 out.go:374] Setting ErrFile to fd 2...
I1121 14:05:00.834664   55067 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:05:00.834887   55067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
I1121 14:05:00.835425   55067 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:05:00.835533   55067 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:05:00.835916   55067 cli_runner.go:164] Run: docker container inspect functional-179014 --format={{.State.Status}}
I1121 14:05:00.853009   55067 ssh_runner.go:195] Run: systemctl --version
I1121 14:05:00.853048   55067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-179014
I1121 14:05:00.869412   55067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/functional-179014/id_rsa Username:docker}
I1121 14:05:00.961385   55067 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-179014 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367b
f5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"30b023bbbcb9bd6a32f56d54ec330e59be9c1602555370f2f5b4753d5b107cf7","repoDigests":["docker.io/library/dbd597afe3c01e9541bd8fdf7ee2334f24f623bf8696e5f33ccae4974833e985-tmp@sha256:72d37e8b108a68a3e8d524c621a59f15ab8f499d3fafb0a1d875dcaf3b3902fe"],"repoTags":[],"size":"1466132"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:
latest"],"size":"155491845"},{"id":"97df052ef0627d617184c6f1f58f57aa696c4ea262b389e487ed8c6d89637e7c","repoDigests":["localhost/my-image@sha256:81df6812f2e51d50932afa97a75fcbb590a1fc2aadeb5ddc750fd8abd8d53e3e"],"repoTags":["localhost/my-image:functional-179014"],"size":"1468744"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"}
,{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e987
65c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529
c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c99
2"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-179014 image ls --format json --alsologtostderr:
I1121 14:05:00.624964   55011 out.go:360] Setting OutFile to fd 1 ...
I1121 14:05:00.625172   55011 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:05:00.625180   55011 out.go:374] Setting ErrFile to fd 2...
I1121 14:05:00.625184   55011 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:05:00.625350   55011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
I1121 14:05:00.625876   55011 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:05:00.625971   55011 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:05:00.626304   55011 cli_runner.go:164] Run: docker container inspect functional-179014 --format={{.State.Status}}
I1121 14:05:00.643093   55011 ssh_runner.go:195] Run: systemctl --version
I1121 14:05:00.643138   55011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-179014
I1121 14:05:00.658994   55011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/functional-179014/id_rsa Username:docker}
I1121 14:05:00.750401   55011 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
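
The JSON format is the machine-readable sibling of the table output: an array of objects with id, repoDigests, repoTags and size fields, sizes arriving as strings of bytes. A sketch that decodes it, assuming only the fields visible in the blob above:

-- example (sketch, not from this run) --
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the logged JSON.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-179014",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID[:12], img.Size, img.RepoTags) // short id, size in bytes, tags
	}
}
-- /example --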

TestFunctional/parallel/ImageCommands/ImageListYaml (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-179014 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-179014 image ls --format yaml --alsologtostderr:
I1121 14:04:57.638785   54276 out.go:360] Setting OutFile to fd 1 ...
I1121 14:04:57.639056   54276 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:04:57.639069   54276 out.go:374] Setting ErrFile to fd 2...
I1121 14:04:57.639076   54276 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:04:57.639384   54276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
I1121 14:04:57.640154   54276 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:04:57.640299   54276 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:04:57.640932   54276 cli_runner.go:164] Run: docker container inspect functional-179014 --format={{.State.Status}}
I1121 14:04:57.662004   54276 ssh_runner.go:195] Run: systemctl --version
I1121 14:04:57.662054   54276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-179014
I1121 14:04:57.682303   54276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/functional-179014/id_rsa Username:docker}
I1121 14:04:57.786139   54276 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.96s)
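For reference, the structured listing exercised here and in ImageListJson can be reproduced directly against the same profile (a sketch, assuming the functional-179014 profile is still running):

	# List images known to CRI-O inside the node, in each structured format
	out/minikube-linux-amd64 -p functional-179014 image ls --format yaml
	out/minikube-linux-amd64 -p functional-179014 image ls --format json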

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179014 ssh pgrep buildkitd: exit status 1 (264.248395ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image build -t localhost/my-image:functional-179014 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-179014 image build -t localhost/my-image:functional-179014 testdata/build --alsologtostderr: (1.565794432s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-179014 image build -t localhost/my-image:functional-179014 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 30b023bbbcb
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-179014
--> 97df052ef06
Successfully tagged localhost/my-image:functional-179014
97df052ef0627d617184c6f1f58f57aa696c4ea262b389e487ed8c6d89637e7c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-179014 image build -t localhost/my-image:functional-179014 testdata/build --alsologtostderr:
I1121 14:04:58.848262   54500 out.go:360] Setting OutFile to fd 1 ...
I1121 14:04:58.848534   54500 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:04:58.848543   54500 out.go:374] Setting ErrFile to fd 2...
I1121 14:04:58.848548   54500 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 14:04:58.848744   54500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
I1121 14:04:58.849266   54500 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:04:58.849871   54500 config.go:182] Loaded profile config "functional-179014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 14:04:58.850221   54500 cli_runner.go:164] Run: docker container inspect functional-179014 --format={{.State.Status}}
I1121 14:04:58.868006   54500 ssh_runner.go:195] Run: systemctl --version
I1121 14:04:58.868048   54500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-179014
I1121 14:04:58.884703   54500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/functional-179014/id_rsa Username:docker}
I1121 14:04:58.976382   54500 build_images.go:162] Building image from path: /tmp/build.4176136261.tar
I1121 14:04:58.976445   54500 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1121 14:04:58.984006   54500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4176136261.tar
I1121 14:04:58.987673   54500 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4176136261.tar: stat -c "%s %y" /var/lib/minikube/build/build.4176136261.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4176136261.tar': No such file or directory
I1121 14:04:58.987695   54500 ssh_runner.go:362] scp /tmp/build.4176136261.tar --> /var/lib/minikube/build/build.4176136261.tar (3072 bytes)
I1121 14:04:59.004020   54500 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4176136261
I1121 14:04:59.010901   54500 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4176136261 -xf /var/lib/minikube/build/build.4176136261.tar
I1121 14:04:59.018202   54500 crio.go:315] Building image: /var/lib/minikube/build/build.4176136261
I1121 14:04:59.018245   54500 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-179014 /var/lib/minikube/build/build.4176136261 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1121 14:05:00.341104   54500 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-179014 /var/lib/minikube/build/build.4176136261 --cgroup-manager=cgroupfs: (1.32284171s)
I1121 14:05:00.341155   54500 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4176136261
I1121 14:05:00.348862   54500 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4176136261.tar
I1121 14:05:00.355780   54500 build_images.go:218] Built localhost/my-image:functional-179014 from /tmp/build.4176136261.tar
I1121 14:05:00.355805   54500 build_images.go:134] succeeded building to: functional-179014
I1121 14:05:00.355811   54500 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.04s)
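The Stderr log above shows how image build works on a crio cluster: the build context is tarred locally, shipped to /var/lib/minikube/build on the node, unpacked, and built with podman. A rough manual equivalent (a sketch only; the directory name ctx and the tar path are illustrative, the real flow generates random build.NNN names):

	# Package the build context and copy it into the node
	tar -cf /tmp/build.tar -C testdata/build .
	out/minikube-linux-amd64 -p functional-179014 cp /tmp/build.tar /var/lib/minikube/build/build.tar
	# Unpack and build with podman, as the log shows minikube doing internally
	out/minikube-linux-amd64 -p functional-179014 ssh -- sudo mkdir -p /var/lib/minikube/build/ctx
	out/minikube-linux-amd64 -p functional-179014 ssh -- sudo tar -C /var/lib/minikube/build/ctx -xf /var/lib/minikube/build/build.tar
	out/minikube-linux-amd64 -p functional-179014 ssh -- sudo podman build -t localhost/my-image:functional-179014 /var/lib/minikube/build/ctx --cgroup-manager=cgroupfs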

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.032482635s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-179014
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.06s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-179014 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-179014 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-179014 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-179014 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 47002: os: process already finished
helpers_test.go:519: unable to terminate pid 46730: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "347.056685ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "62.103083ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-179014 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
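The tunnel tests drive minikube tunnel as a background daemon; a minimal sketch of the same lifecycle outside the harness (assuming the profile is running; tunnel may prompt for sudo to install routes):

	out/minikube-linux-amd64 -p functional-179014 tunnel --alsologtostderr &
	TUNNEL_PID=$!
	# ... LoadBalancer services in the cluster now receive an ingress IP ...
	kill "$TUNNEL_PID"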

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-179014 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [0c9fb286-29ef-439f-a881-3951b25172e6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [0c9fb286-29ef-439f-a881-3951b25172e6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004034146s
I1121 14:04:31.886832   14542 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.20s)
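The harness's pod matcher above can be approximated with kubectl alone (a sketch; run=nginx-svc is the label the testsvc.yaml pod carries, per the matcher output):

	kubectl --context functional-179014 apply -f testdata/testsvc.yaml
	kubectl --context functional-179014 wait pod -l run=nginx-svc --for=condition=Ready --timeout=4m0s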

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "340.428559ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "56.592165ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image rm kicbase/echo-server:functional-179014 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-179014 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
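The same JSONPath query, combined with the direct-access check the next test performs, fits in two lines (a sketch, assuming the tunnel from StartTunnel is still up):

	IP=$(kubectl --context functional-179014 get svc nginx-svc -o 'jsonpath={.status.loadBalancer.ingress[0].ip}')
	curl -fsS "http://$IP" >/dev/null && echo "tunnel at http://$IP is working!"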

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.170.39 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-179014 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (5.78s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-179014 /tmp/TestFunctionalparallelMountCmdany-port4292075027/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763733872259938505" to /tmp/TestFunctionalparallelMountCmdany-port4292075027/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763733872259938505" to /tmp/TestFunctionalparallelMountCmdany-port4292075027/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763733872259938505" to /tmp/TestFunctionalparallelMountCmdany-port4292075027/001/test-1763733872259938505
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179014 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (313.489431ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I1121 14:04:32.573752   14542 retry.go:31] will retry after 635.421719ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 21 14:04 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 21 14:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 21 14:04 test-1763733872259938505
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh cat /mount-9p/test-1763733872259938505
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-179014 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [73946813-241a-4d9d-974f-bf444469e5e1] Pending
helpers_test.go:352: "busybox-mount" [73946813-241a-4d9d-974f-bf444469e5e1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [73946813-241a-4d9d-974f-bf444469e5e1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [73946813-241a-4d9d-974f-bf444469e5e1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.00335945s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-179014 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-179014 /tmp/TestFunctionalparallelMountCmdany-port4292075027/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.78s)
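Outside the harness, the mount flow above, including the findmnt retry while the 9p server comes up, looks roughly like this (a sketch; /tmp/mount-src is an illustrative host directory):

	out/minikube-linux-amd64 mount -p functional-179014 /tmp/mount-src:/mount-9p &
	MOUNT_PID=$!
	# Retry until the 9p filesystem becomes visible inside the node
	until out/minikube-linux-amd64 -p functional-179014 ssh "findmnt -T /mount-9p | grep 9p"; do sleep 1; done
	out/minikube-linux-amd64 -p functional-179014 ssh -- ls -la /mount-9p
	kill "$MOUNT_PID"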

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.87s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-179014 /tmp/TestFunctionalparallelMountCmdspecific-port2267905910/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179014 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.403435ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I1121 14:04:38.310330   14542 retry.go:31] will retry after 598.089551ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-179014 /tmp/TestFunctionalparallelMountCmdspecific-port2267905910/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179014 ssh "sudo umount -f /mount-9p": exit status 1 (272.984461ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-179014 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-179014 /tmp/TestFunctionalparallelMountCmdspecific-port2267905910/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-179014 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2975248641/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-179014 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2975248641/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-179014 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2975248641/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179014 ssh "findmnt -T" /mount1: exit status 1 (325.812755ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I1121 14:04:40.236352   14542 retry.go:31] will retry after 593.282263ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-179014 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-179014 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2975248641/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-179014 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2975248641/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-179014 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2975248641/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)
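Cleanup verification hinges on the --kill flag, which tears down every mount process for the profile in one call (a sketch; /tmp/mount-src is illustrative):

	for tgt in /mount1 /mount2 /mount3; do
	  out/minikube-linux-amd64 mount -p functional-179014 /tmp/mount-src:"$tgt" &
	done
	out/minikube-linux-amd64 -p functional-179014 ssh "findmnt -T" /mount1
	# One call terminates all three background mounts
	out/minikube-linux-amd64 mount -p functional-179014 --kill=true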

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ServiceCmd/List (1.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-179014 service list: (1.683954747s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.68s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-179014 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-179014 service list -o json: (1.682556771s)
functional_test.go:1504: Took "1.682643788s" to run "out/minikube-linux-amd64 -p functional-179014 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)
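The -o json form is the machine-readable counterpart of service list and can be post-processed, for example with jq (a sketch; the Name field is an assumption about the output schema, which this test does not print):

	out/minikube-linux-amd64 -p functional-179014 service list -o json | jq -r '.[].Name'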

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-179014
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-179014
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-179014
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (116.96s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-708226 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m56.273087319s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (116.96s)
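Restated without the harness wrapper, the invocation above is the canonical way to bring up a multi-control-plane cluster on this driver/runtime combination:

	out/minikube-linux-amd64 -p ha-708226 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p ha-708226 status --alsologtostderr -v 5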

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.02s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-708226 kubectl -- rollout status deployment/busybox: (2.058311831s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- exec busybox-7b57f96db7-g4m7b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- exec busybox-7b57f96db7-m8rcr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- exec busybox-7b57f96db7-zbfcj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- exec busybox-7b57f96db7-g4m7b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- exec busybox-7b57f96db7-m8rcr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- exec busybox-7b57f96db7-zbfcj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- exec busybox-7b57f96db7-g4m7b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- exec busybox-7b57f96db7-m8rcr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- exec busybox-7b57f96db7-zbfcj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.02s)
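The per-pod nslookup fan-out above collapses into a loop (a sketch; like the test, it enumerates all pods in the default namespace):

	for pod in $(kubectl --context ha-708226 get pods -o 'jsonpath={.items[*].metadata.name}'); do
	  kubectl --context ha-708226 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done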

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.97s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- exec busybox-7b57f96db7-g4m7b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- exec busybox-7b57f96db7-g4m7b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- exec busybox-7b57f96db7-m8rcr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- exec busybox-7b57f96db7-m8rcr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- exec busybox-7b57f96db7-zbfcj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 kubectl -- exec busybox-7b57f96db7-zbfcj -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.97s)
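Each iteration resolves host.minikube.internal inside the pod, then pings the resulting gateway address; as a two-liner (a sketch reusing one pod name from the log):

	HOST_IP=$(kubectl --context ha-708226 exec busybox-7b57f96db7-g4m7b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context ha-708226 exec busybox-7b57f96db7-g4m7b -- ping -c 1 "$HOST_IP"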

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.12s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-708226 node add --alsologtostderr -v 5: (23.286982319s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.12s)
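Adding the worker is a single subcommand against the existing profile, restated without the harness wrapper:

	out/minikube-linux-amd64 -p ha-708226 node add --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-708226 status --alsologtostderr -v 5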

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-708226 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

TestMultiControlPlane/serial/CopyFile (16.24s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp testdata/cp-test.txt ha-708226:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1198064985/001/cp-test_ha-708226.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226:/home/docker/cp-test.txt ha-708226-m02:/home/docker/cp-test_ha-708226_ha-708226-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m02 "sudo cat /home/docker/cp-test_ha-708226_ha-708226-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226:/home/docker/cp-test.txt ha-708226-m03:/home/docker/cp-test_ha-708226_ha-708226-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m03 "sudo cat /home/docker/cp-test_ha-708226_ha-708226-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226:/home/docker/cp-test.txt ha-708226-m04:/home/docker/cp-test_ha-708226_ha-708226-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m04 "sudo cat /home/docker/cp-test_ha-708226_ha-708226-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp testdata/cp-test.txt ha-708226-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1198064985/001/cp-test_ha-708226-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226-m02:/home/docker/cp-test.txt ha-708226:/home/docker/cp-test_ha-708226-m02_ha-708226.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226 "sudo cat /home/docker/cp-test_ha-708226-m02_ha-708226.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226-m02:/home/docker/cp-test.txt ha-708226-m03:/home/docker/cp-test_ha-708226-m02_ha-708226-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m03 "sudo cat /home/docker/cp-test_ha-708226-m02_ha-708226-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226-m02:/home/docker/cp-test.txt ha-708226-m04:/home/docker/cp-test_ha-708226-m02_ha-708226-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m04 "sudo cat /home/docker/cp-test_ha-708226-m02_ha-708226-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp testdata/cp-test.txt ha-708226-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1198064985/001/cp-test_ha-708226-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226-m03:/home/docker/cp-test.txt ha-708226:/home/docker/cp-test_ha-708226-m03_ha-708226.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226 "sudo cat /home/docker/cp-test_ha-708226-m03_ha-708226.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226-m03:/home/docker/cp-test.txt ha-708226-m02:/home/docker/cp-test_ha-708226-m03_ha-708226-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m02 "sudo cat /home/docker/cp-test_ha-708226-m03_ha-708226-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226-m03:/home/docker/cp-test.txt ha-708226-m04:/home/docker/cp-test_ha-708226-m03_ha-708226-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m04 "sudo cat /home/docker/cp-test_ha-708226-m03_ha-708226-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp testdata/cp-test.txt ha-708226-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1198064985/001/cp-test_ha-708226-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226-m04:/home/docker/cp-test.txt ha-708226:/home/docker/cp-test_ha-708226-m04_ha-708226.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226 "sudo cat /home/docker/cp-test_ha-708226-m04_ha-708226.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226-m04:/home/docker/cp-test.txt ha-708226-m02:/home/docker/cp-test_ha-708226-m04_ha-708226-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m02 "sudo cat /home/docker/cp-test_ha-708226-m04_ha-708226-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 cp ha-708226-m04:/home/docker/cp-test.txt ha-708226-m03:/home/docker/cp-test_ha-708226-m04_ha-708226-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 ssh -n ha-708226-m03 "sudo cat /home/docker/cp-test_ha-708226-m04_ha-708226-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.24s)
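The copy matrix above exercises every (source node, destination node) pair; per destination node the core pattern reduces to (a sketch):

	for node in ha-708226 ha-708226-m02 ha-708226-m03 ha-708226-m04; do
	  out/minikube-linux-amd64 -p ha-708226 cp testdata/cp-test.txt "$node":/home/docker/cp-test.txt
	  out/minikube-linux-amd64 -p ha-708226 ssh -n "$node" "sudo cat /home/docker/cp-test.txt"
	done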

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.16s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-708226 node stop m02 --alsologtostderr -v 5: (12.499485651s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-708226 status --alsologtostderr -v 5: exit status 7 (656.706414ms)

-- stdout --
	ha-708226
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-708226-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-708226-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-708226-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr **
	I1121 14:17:34.986323   79256 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:17:34.986614   79256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:17:34.986624   79256 out.go:374] Setting ErrFile to fd 2...
	I1121 14:17:34.986630   79256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:17:34.986818   79256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:17:34.987003   79256 out.go:368] Setting JSON to false
	I1121 14:17:34.987039   79256 mustload.go:66] Loading cluster: ha-708226
	I1121 14:17:34.987124   79256 notify.go:221] Checking for updates...
	I1121 14:17:34.987426   79256 config.go:182] Loaded profile config "ha-708226": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:17:34.987442   79256 status.go:174] checking status of ha-708226 ...
	I1121 14:17:34.987902   79256 cli_runner.go:164] Run: docker container inspect ha-708226 --format={{.State.Status}}
	I1121 14:17:35.006210   79256 status.go:371] ha-708226 host status = "Running" (err=<nil>)
	I1121 14:17:35.006240   79256 host.go:66] Checking if "ha-708226" exists ...
	I1121 14:17:35.006505   79256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-708226
	I1121 14:17:35.024664   79256 host.go:66] Checking if "ha-708226" exists ...
	I1121 14:17:35.024923   79256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:17:35.024970   79256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-708226
	I1121 14:17:35.041069   79256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/ha-708226/id_rsa Username:docker}
	I1121 14:17:35.132351   79256 ssh_runner.go:195] Run: systemctl --version
	I1121 14:17:35.138199   79256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:17:35.149423   79256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:17:35.205749   79256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 14:17:35.196453823 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:17:35.206348   79256 kubeconfig.go:125] found "ha-708226" server: "https://192.168.49.254:8443"
	I1121 14:17:35.206385   79256 api_server.go:166] Checking apiserver status ...
	I1121 14:17:35.206429   79256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:17:35.217708   79256 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1258/cgroup
	W1121 14:17:35.225799   79256 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1258/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:17:35.225832   79256 ssh_runner.go:195] Run: ls
	I1121 14:17:35.229210   79256 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1121 14:17:35.233191   79256 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1121 14:17:35.233208   79256 status.go:463] ha-708226 apiserver status = Running (err=<nil>)
	I1121 14:17:35.233216   79256 status.go:176] ha-708226 status: &{Name:ha-708226 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:17:35.233229   79256 status.go:174] checking status of ha-708226-m02 ...
	I1121 14:17:35.233432   79256 cli_runner.go:164] Run: docker container inspect ha-708226-m02 --format={{.State.Status}}
	I1121 14:17:35.250728   79256 status.go:371] ha-708226-m02 host status = "Stopped" (err=<nil>)
	I1121 14:17:35.250746   79256 status.go:384] host is not running, skipping remaining checks
	I1121 14:17:35.250753   79256 status.go:176] ha-708226-m02 status: &{Name:ha-708226-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:17:35.250773   79256 status.go:174] checking status of ha-708226-m03 ...
	I1121 14:17:35.251020   79256 cli_runner.go:164] Run: docker container inspect ha-708226-m03 --format={{.State.Status}}
	I1121 14:17:35.268666   79256 status.go:371] ha-708226-m03 host status = "Running" (err=<nil>)
	I1121 14:17:35.268686   79256 host.go:66] Checking if "ha-708226-m03" exists ...
	I1121 14:17:35.268978   79256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-708226-m03
	I1121 14:17:35.285675   79256 host.go:66] Checking if "ha-708226-m03" exists ...
	I1121 14:17:35.285928   79256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:17:35.285975   79256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-708226-m03
	I1121 14:17:35.303213   79256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/ha-708226-m03/id_rsa Username:docker}
	I1121 14:17:35.394419   79256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:17:35.406511   79256 kubeconfig.go:125] found "ha-708226" server: "https://192.168.49.254:8443"
	I1121 14:17:35.406534   79256 api_server.go:166] Checking apiserver status ...
	I1121 14:17:35.406579   79256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:17:35.416701   79256 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1168/cgroup
	W1121 14:17:35.424083   79256 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1168/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:17:35.424120   79256 ssh_runner.go:195] Run: ls
	I1121 14:17:35.427813   79256 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1121 14:17:35.432446   79256 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1121 14:17:35.432467   79256 status.go:463] ha-708226-m03 apiserver status = Running (err=<nil>)
	I1121 14:17:35.432484   79256 status.go:176] ha-708226-m03 status: &{Name:ha-708226-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:17:35.432500   79256 status.go:174] checking status of ha-708226-m04 ...
	I1121 14:17:35.432745   79256 cli_runner.go:164] Run: docker container inspect ha-708226-m04 --format={{.State.Status}}
	I1121 14:17:35.449526   79256 status.go:371] ha-708226-m04 host status = "Running" (err=<nil>)
	I1121 14:17:35.449548   79256 host.go:66] Checking if "ha-708226-m04" exists ...
	I1121 14:17:35.449852   79256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-708226-m04
	I1121 14:17:35.466828   79256 host.go:66] Checking if "ha-708226-m04" exists ...
	I1121 14:17:35.467122   79256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:17:35.467166   79256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-708226-m04
	I1121 14:17:35.483631   79256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/ha-708226-m04/id_rsa Username:docker}
	I1121 14:17:35.574220   79256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:17:35.585706   79256 status.go:176] ha-708226-m04 status: &{Name:ha-708226-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.16s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.74s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-708226 node start m02 --alsologtostderr -v 5: (7.845418913s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.74s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.86s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 stop --alsologtostderr -v 5
E1121 14:17:52.780072   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-708226 stop --alsologtostderr -v 5: (48.035828466s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 start --wait true --alsologtostderr -v 5
E1121 14:19:15.850014   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:19:23.291760   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:19:23.298128   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:19:23.309471   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:19:23.330804   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:19:23.372145   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:19:23.453484   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:19:23.614926   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:19:23.936597   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:19:24.578230   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:19:25.859686   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:19:28.421850   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-708226 start --wait true --alsologtostderr -v 5: (58.70229194s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.86s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.42s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 node delete m03 --alsologtostderr -v 5
E1121 14:19:33.543847   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-708226 node delete m03 --alsologtostderr -v 5: (9.635982054s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.42s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

TestMultiControlPlane/serial/StopCluster (48.44s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 stop --alsologtostderr -v 5
E1121 14:19:43.785415   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:20:04.267451   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-708226 stop --alsologtostderr -v 5: (48.327945483s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-708226 status --alsologtostderr -v 5: exit status 7 (110.378266ms)

-- stdout --
	ha-708226
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-708226-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-708226-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr **
	I1121 14:20:32.150404   94014 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:20:32.150684   94014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:20:32.150694   94014 out.go:374] Setting ErrFile to fd 2...
	I1121 14:20:32.150698   94014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:20:32.150861   94014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:20:32.151016   94014 out.go:368] Setting JSON to false
	I1121 14:20:32.151045   94014 mustload.go:66] Loading cluster: ha-708226
	I1121 14:20:32.151128   94014 notify.go:221] Checking for updates...
	I1121 14:20:32.151415   94014 config.go:182] Loaded profile config "ha-708226": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:20:32.151432   94014 status.go:174] checking status of ha-708226 ...
	I1121 14:20:32.151924   94014 cli_runner.go:164] Run: docker container inspect ha-708226 --format={{.State.Status}}
	I1121 14:20:32.172081   94014 status.go:371] ha-708226 host status = "Stopped" (err=<nil>)
	I1121 14:20:32.172117   94014 status.go:384] host is not running, skipping remaining checks
	I1121 14:20:32.172125   94014 status.go:176] ha-708226 status: &{Name:ha-708226 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:20:32.172163   94014 status.go:174] checking status of ha-708226-m02 ...
	I1121 14:20:32.172392   94014 cli_runner.go:164] Run: docker container inspect ha-708226-m02 --format={{.State.Status}}
	I1121 14:20:32.188332   94014 status.go:371] ha-708226-m02 host status = "Stopped" (err=<nil>)
	I1121 14:20:32.188346   94014 status.go:384] host is not running, skipping remaining checks
	I1121 14:20:32.188351   94014 status.go:176] ha-708226-m02 status: &{Name:ha-708226-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:20:32.188364   94014 status.go:174] checking status of ha-708226-m04 ...
	I1121 14:20:32.188574   94014 cli_runner.go:164] Run: docker container inspect ha-708226-m04 --format={{.State.Status}}
	I1121 14:20:32.204605   94014 status.go:371] ha-708226-m04 host status = "Stopped" (err=<nil>)
	I1121 14:20:32.204629   94014 status.go:384] host is not running, skipping remaining checks
	I1121 14:20:32.204634   94014 status.go:176] ha-708226-m04 status: &{Name:ha-708226-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (48.44s)

TestMultiControlPlane/serial/RestartCluster (56.02s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1121 14:20:45.229188   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-708226 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.255766084s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.02s)
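The go-template assertion above prints one Ready condition status per node. A rough equivalent using kubectl's jsonpath support (an illustrative alternative, not what the harness runs) would be:

	kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

A fully recovered cluster prints "True" once per node.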

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

TestMultiControlPlane/serial/AddSecondaryNode (42.79s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 node add --control-plane --alsologtostderr -v 5
E1121 14:22:07.151196   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-708226 node add --control-plane --alsologtostderr -v 5: (41.95834617s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-708226 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.79s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

TestJSONOutput/start/Command (37.13s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-196016 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1121 14:22:52.779683   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-196016 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (37.130868266s)
--- PASS: TestJSONOutput/start/Command (37.13s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.03s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-196016 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-196016 --output=json --user=testUser: (6.029683s)
--- PASS: TestJSONOutput/stop/Command (6.03s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-483742 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-483742 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (74.561006ms)

-- stdout --
	{"specversion":"1.0","id":"8f8505b3-9e87-4688-a3af-197b751a4d57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-483742] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fe771103-d0bb-4cf2-8d69-3d4254e630a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21847"}}
	{"specversion":"1.0","id":"20ce1e82-fcab-45e4-9f0e-9c9bfe984b08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3fed37aa-bee1-414e-9326-ebf129ef51f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig"}}
	{"specversion":"1.0","id":"e4b4e9e9-a7db-444e-9d5c-93b22ad4c636","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube"}}
	{"specversion":"1.0","id":"d065a06f-95e6-42c8-a7e8-cce0b8b99a1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e6ce81d4-0a9d-45df-9ab4-2d89c26b5eec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d37a9379-07ac-43e4-8650-62b36255f512","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-483742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-483742
--- PASS: TestErrorJSONOutput (0.22s)
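The events captured above are minikube's CloudEvents-style JSON output, one object per line, so the stream can be post-processed with standard tools. A minimal sketch, assuming jq is installed (field names are taken from the captured output):

	out/minikube-linux-amd64 start -p json-output-error-483742 --memory=3072 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'

Against the run captured here this would print: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64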

TestKicCustomNetwork/create_custom_network (26.92s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-078355 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-078355 --network=: (24.79242992s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-078355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-078355
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-078355: (2.103228804s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.92s)

TestKicCustomNetwork/use_default_bridge_network (22.68s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-157467 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-157467 --network=bridge: (20.709903973s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-157467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-157467
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-157467: (1.95003549s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.68s)

TestKicExistingNetwork (23.66s)
=== RUN   TestKicExistingNetwork
I1121 14:24:02.194080   14542 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1121 14:24:02.209924   14542 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1121 14:24:02.209993   14542 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1121 14:24:02.210026   14542 cli_runner.go:164] Run: docker network inspect existing-network
W1121 14:24:02.224991   14542 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1121 14:24:02.225014   14542 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1121 14:24:02.225034   14542 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr **
Error response from daemon: network existing-network not found

** /stderr **
I1121 14:24:02.225144   14542 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1121 14:24:02.240298   14542 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-28b1c9d83f01 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:19:47:f8:32:b5} reservation:<nil>}
I1121 14:24:02.240698   14542 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004a4a60}
I1121 14:24:02.240722   14542 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1121 14:24:02.240771   14542 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1121 14:24:02.284267   14542 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-285955 --network=existing-network
E1121 14:24:23.291502   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-285955 --network=existing-network: (21.582802078s)
helpers_test.go:175: Cleaning up "existing-network-285955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-285955
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-285955: (1.957059433s)
I1121 14:24:25.840025   14542 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.66s)
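For reference, the pre-created network this test consumes can be reproduced by hand; the name, subnet, gateway, and MTU below are the values minikube chose in the log above (a sketch only; adjust the subnet if 192.168.58.0/24 is already taken on your host):

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o com.docker.network.driver.mtu=1500 existing-network
	out/minikube-linux-amd64 start -p existing-network-285955 --network=existing-network

minikube then attaches the node container to existing-network instead of creating its own bridge.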

TestKicCustomSubnet (23.31s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-417349 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-417349 --subnet=192.168.60.0/24: (21.212787315s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-417349 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-417349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-417349
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-417349: (2.075366596s)
--- PASS: TestKicCustomSubnet (23.31s)

TestKicStaticIP (24.06s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-668344 --static-ip=192.168.200.200
E1121 14:24:50.993742   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-668344 --static-ip=192.168.200.200: (21.862058624s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-668344 ip
helpers_test.go:175: Cleaning up "static-ip-668344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-668344
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-668344: (2.053374419s)
--- PASS: TestKicStaticIP (24.06s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (45.78s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-018733 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-018733 --driver=docker  --container-runtime=crio: (20.005537418s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-021423 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-021423 --driver=docker  --container-runtime=crio: (20.020120111s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-018733
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-021423
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-021423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-021423
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-021423: (2.276642926s)
helpers_test.go:175: Cleaning up "first-018733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-018733
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-018733: (2.319146095s)
--- PASS: TestMinikubeProfile (45.78s)

TestMountStart/serial/StartWithMountFirst (4.71s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-934282 --memory=3072 --mount-string /tmp/TestMountStartserial2782438312/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-934282 --memory=3072 --mount-string /tmp/TestMountStartserial2782438312/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.713047123s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.71s)

TestMountStart/serial/VerifyMountFirst (0.25s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-934282 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (7.46s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-949625 --memory=3072 --mount-string /tmp/TestMountStartserial2782438312/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-949625 --memory=3072 --mount-string /tmp/TestMountStartserial2782438312/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.459705921s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.46s)

TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-949625 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.63s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-934282 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-934282 --alsologtostderr -v=5: (1.625315911s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-949625 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.24s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-949625
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-949625: (1.236445945s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.33s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-949625
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-949625: (6.333029886s)
--- PASS: TestMountStart/serial/RestartStopped (7.33s)

TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-949625 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (90.78s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-384928 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1121 14:27:52.777666   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-384928 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m30.316066129s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (90.78s)

TestMultiNode/serial/DeployApp2Nodes (3.33s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-384928 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-384928 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-384928 -- rollout status deployment/busybox: (2.002052424s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-384928 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-384928 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-384928 -- exec busybox-7b57f96db7-pnltn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-384928 -- exec busybox-7b57f96db7-wdxr7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-384928 -- exec busybox-7b57f96db7-pnltn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-384928 -- exec busybox-7b57f96db7-wdxr7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-384928 -- exec busybox-7b57f96db7-pnltn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-384928 -- exec busybox-7b57f96db7-wdxr7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.33s)

TestMultiNode/serial/PingHostFrom2Pods (0.66s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-384928 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-384928 -- exec busybox-7b57f96db7-pnltn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-384928 -- exec busybox-7b57f96db7-pnltn -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-384928 -- exec busybox-7b57f96db7-wdxr7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-384928 -- exec busybox-7b57f96db7-wdxr7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.66s)
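The pipeline in this test extracts the host gateway address from inside a busybox pod before pinging it once. Reconstructed as a standalone sketch (this assumes busybox's nslookup output format, where line 5 carries the resolved address as its third space-separated field):

	HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
	ping -c 1 "$HOST_IP"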

TestMultiNode/serial/AddNode (25.83s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-384928 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-384928 -v=5 --alsologtostderr: (25.235044716s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (25.83s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-384928 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.62s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

TestMultiNode/serial/CopyFile (9.25s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 cp testdata/cp-test.txt multinode-384928:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 cp multinode-384928:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1537588437/001/cp-test_multinode-384928.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 cp multinode-384928:/home/docker/cp-test.txt multinode-384928-m02:/home/docker/cp-test_multinode-384928_multinode-384928-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928-m02 "sudo cat /home/docker/cp-test_multinode-384928_multinode-384928-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 cp multinode-384928:/home/docker/cp-test.txt multinode-384928-m03:/home/docker/cp-test_multinode-384928_multinode-384928-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928-m03 "sudo cat /home/docker/cp-test_multinode-384928_multinode-384928-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 cp testdata/cp-test.txt multinode-384928-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 cp multinode-384928-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1537588437/001/cp-test_multinode-384928-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 cp multinode-384928-m02:/home/docker/cp-test.txt multinode-384928:/home/docker/cp-test_multinode-384928-m02_multinode-384928.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928 "sudo cat /home/docker/cp-test_multinode-384928-m02_multinode-384928.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 cp multinode-384928-m02:/home/docker/cp-test.txt multinode-384928-m03:/home/docker/cp-test_multinode-384928-m02_multinode-384928-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928-m03 "sudo cat /home/docker/cp-test_multinode-384928-m02_multinode-384928-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 cp testdata/cp-test.txt multinode-384928-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 cp multinode-384928-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1537588437/001/cp-test_multinode-384928-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 cp multinode-384928-m03:/home/docker/cp-test.txt multinode-384928:/home/docker/cp-test_multinode-384928-m03_multinode-384928.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928 "sudo cat /home/docker/cp-test_multinode-384928-m03_multinode-384928.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 cp multinode-384928-m03:/home/docker/cp-test.txt multinode-384928-m02:/home/docker/cp-test_multinode-384928-m03_multinode-384928-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 ssh -n multinode-384928-m02 "sudo cat /home/docker/cp-test_multinode-384928-m03_multinode-384928-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.25s)
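
Note: the copy matrix above can be reproduced by hand with `minikube cp`, which accepts host paths and node:path pairs in either position; each transfer is verified with `sudo cat` over `minikube ssh`. A minimal sketch, assuming a running multi-node profile named multinode-384928 and `minikube` on PATH (the suite drives the built binary out/minikube-linux-amd64):

	# host -> node
	minikube -p multinode-384928 cp testdata/cp-test.txt multinode-384928:/home/docker/cp-test.txt
	# node -> host
	minikube -p multinode-384928 cp multinode-384928:/home/docker/cp-test.txt /tmp/cp-test.txt
	# node -> node, then verify contents on the destination node
	minikube -p multinode-384928 cp multinode-384928:/home/docker/cp-test.txt multinode-384928-m02:/home/docker/cp-test.txt
	minikube -p multinode-384928 ssh -n multinode-384928-m02 "sudo cat /home/docker/cp-test.txt"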

                                                
                                    
TestMultiNode/serial/StopNode (2.17s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-384928 node stop m03: (1.251142539s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-384928 status: exit status 7 (458.295445ms)

                                                
                                                
-- stdout --
	multinode-384928
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-384928-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-384928-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-384928 status --alsologtostderr: exit status 7 (461.983972ms)

                                                
                                                
-- stdout --
	multinode-384928
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-384928-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-384928-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 14:28:36.567428  153769 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:28:36.567667  153769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:28:36.567675  153769 out.go:374] Setting ErrFile to fd 2...
	I1121 14:28:36.567679  153769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:28:36.567865  153769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:28:36.568017  153769 out.go:368] Setting JSON to false
	I1121 14:28:36.568044  153769 mustload.go:66] Loading cluster: multinode-384928
	I1121 14:28:36.568134  153769 notify.go:221] Checking for updates...
	I1121 14:28:36.568349  153769 config.go:182] Loaded profile config "multinode-384928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:28:36.568361  153769 status.go:174] checking status of multinode-384928 ...
	I1121 14:28:36.569870  153769 cli_runner.go:164] Run: docker container inspect multinode-384928 --format={{.State.Status}}
	I1121 14:28:36.590698  153769 status.go:371] multinode-384928 host status = "Running" (err=<nil>)
	I1121 14:28:36.590737  153769 host.go:66] Checking if "multinode-384928" exists ...
	I1121 14:28:36.590981  153769 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384928
	I1121 14:28:36.607241  153769 host.go:66] Checking if "multinode-384928" exists ...
	I1121 14:28:36.607464  153769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:28:36.607506  153769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384928
	I1121 14:28:36.623108  153769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/multinode-384928/id_rsa Username:docker}
	I1121 14:28:36.713124  153769 ssh_runner.go:195] Run: systemctl --version
	I1121 14:28:36.719001  153769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:28:36.729864  153769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:28:36.781983  153769 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-21 14:28:36.772964011 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:28:36.782608  153769 kubeconfig.go:125] found "multinode-384928" server: "https://192.168.67.2:8443"
	I1121 14:28:36.782637  153769 api_server.go:166] Checking apiserver status ...
	I1121 14:28:36.782676  153769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 14:28:36.793566  153769 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	W1121 14:28:36.801278  153769 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1121 14:28:36.801314  153769 ssh_runner.go:195] Run: ls
	I1121 14:28:36.804689  153769 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1121 14:28:36.808343  153769 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1121 14:28:36.808362  153769 status.go:463] multinode-384928 apiserver status = Running (err=<nil>)
	I1121 14:28:36.808370  153769 status.go:176] multinode-384928 status: &{Name:multinode-384928 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:28:36.808386  153769 status.go:174] checking status of multinode-384928-m02 ...
	I1121 14:28:36.808658  153769 cli_runner.go:164] Run: docker container inspect multinode-384928-m02 --format={{.State.Status}}
	I1121 14:28:36.824741  153769 status.go:371] multinode-384928-m02 host status = "Running" (err=<nil>)
	I1121 14:28:36.824757  153769 host.go:66] Checking if "multinode-384928-m02" exists ...
	I1121 14:28:36.824999  153769 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-384928-m02
	I1121 14:28:36.840955  153769 host.go:66] Checking if "multinode-384928-m02" exists ...
	I1121 14:28:36.841167  153769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 14:28:36.841203  153769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-384928-m02
	I1121 14:28:36.857012  153769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21847-11045/.minikube/machines/multinode-384928-m02/id_rsa Username:docker}
	I1121 14:28:36.946943  153769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 14:28:36.958350  153769 status.go:176] multinode-384928-m02 status: &{Name:multinode-384928-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:28:36.958375  153769 status.go:174] checking status of multinode-384928-m03 ...
	I1121 14:28:36.958642  153769 cli_runner.go:164] Run: docker container inspect multinode-384928-m03 --format={{.State.Status}}
	I1121 14:28:36.974281  153769 status.go:371] multinode-384928-m03 host status = "Stopped" (err=<nil>)
	I1121 14:28:36.974296  153769 status.go:384] host is not running, skipping remaining checks
	I1121 14:28:36.974301  153769 status.go:176] multinode-384928-m03 status: &{Name:multinode-384928-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)
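
Note: `minikube status` deliberately exits non-zero (7 here) when any node in the profile is down, so the Non-zero exit above is the expected outcome, not a failure. A minimal sketch of the sequence, same profile assumed:

	minikube -p multinode-384928 node stop m03             # stop only the m03 worker
	minikube -p multinode-384928 status                    # exit 7: m03 reports host/kubelet Stopped
	minikube -p multinode-384928 status --alsologtostderr  # same result, with each check logged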

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.04s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-384928 node start m03 -v=5 --alsologtostderr: (6.381998503s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.04s)
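
Note: restarting the stopped worker is the symmetric operation; the test then confirms the node rejoins the cluster via the API server. A minimal sketch:

	minikube -p multinode-384928 node start m03 -v=5 --alsologtostderr
	minikube -p multinode-384928 status -v=5 --alsologtostderr   # exits 0 once all nodes are running
	kubectl get nodes                                            # m03 is listed again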

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (55.88s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-384928
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-384928
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-384928: (31.327406682s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-384928 --wait=true -v=5 --alsologtostderr
E1121 14:29:23.292498   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-384928 --wait=true -v=5 --alsologtostderr: (24.435835791s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-384928
--- PASS: TestMultiNode/serial/RestartKeepsNodes (55.88s)
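
Note: the property under test is that a profile-wide stop followed by `start --wait=true` restores every node recorded in the profile, so `node list` prints the same set before and after. A minimal sketch:

	minikube node list -p multinode-384928    # record the node set
	minikube stop -p multinode-384928         # stops all nodes in the profile
	minikube start -p multinode-384928 --wait=true -v=5 --alsologtostderr
	minikube node list -p multinode-384928    # should match the first listing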

                                                
                                    
TestMultiNode/serial/DeleteNode (4.93s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-384928 node delete m03: (4.32053992s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.93s)
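
Note: deletion is checked from both sides: minikube's own status and the Kubernetes view, where the go-template query above prints one Ready-condition status per remaining node. A minimal sketch:

	minikube -p multinode-384928 node delete m03
	minikube -p multinode-384928 status --alsologtostderr   # only the remaining two nodes are listed
	kubectl get nodes                                       # m03 is gone from the cluster
	# the go-template query above should print one "True" per remaining Ready node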

                                                
                                    
TestMultiNode/serial/StopMultiNode (17.54s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-384928 stop: (17.356698678s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-384928 status: exit status 7 (92.197436ms)

                                                
                                                
-- stdout --
	multinode-384928
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-384928-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-384928 status --alsologtostderr: exit status 7 (91.736572ms)

                                                
                                                
-- stdout --
	multinode-384928
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-384928-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 14:30:02.322774  162670 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:30:02.322874  162670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:30:02.322885  162670 out.go:374] Setting ErrFile to fd 2...
	I1121 14:30:02.322890  162670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:30:02.323082  162670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:30:02.323309  162670 out.go:368] Setting JSON to false
	I1121 14:30:02.323338  162670 mustload.go:66] Loading cluster: multinode-384928
	I1121 14:30:02.323424  162670 notify.go:221] Checking for updates...
	I1121 14:30:02.323800  162670 config.go:182] Loaded profile config "multinode-384928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:30:02.323820  162670 status.go:174] checking status of multinode-384928 ...
	I1121 14:30:02.324280  162670 cli_runner.go:164] Run: docker container inspect multinode-384928 --format={{.State.Status}}
	I1121 14:30:02.342784  162670 status.go:371] multinode-384928 host status = "Stopped" (err=<nil>)
	I1121 14:30:02.342812  162670 status.go:384] host is not running, skipping remaining checks
	I1121 14:30:02.342817  162670 status.go:176] multinode-384928 status: &{Name:multinode-384928 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 14:30:02.342840  162670 status.go:174] checking status of multinode-384928-m02 ...
	I1121 14:30:02.343068  162670 cli_runner.go:164] Run: docker container inspect multinode-384928-m02 --format={{.State.Status}}
	I1121 14:30:02.359277  162670 status.go:371] multinode-384928-m02 host status = "Stopped" (err=<nil>)
	I1121 14:30:02.359299  162670 status.go:384] host is not running, skipping remaining checks
	I1121 14:30:02.359307  162670 status.go:176] multinode-384928-m02 status: &{Name:multinode-384928-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (17.54s)
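
Note: after a profile-wide stop, `status` reports every node Stopped and exits 7, the same convention as the single-node case earlier. A minimal sketch:

	minikube -p multinode-384928 stop      # stops control plane and worker together
	minikube -p multinode-384928 status    # exit 7: all nodes report Stopped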

                                                
                                    
TestMultiNode/serial/RestartMultiNode (26.13s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-384928 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-384928 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (25.566058629s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-384928 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (26.13s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.82s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-384928
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-384928-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-384928-m02 --driver=docker  --container-runtime=crio: exit status 14 (73.749677ms)

                                                
                                                
-- stdout --
	* [multinode-384928-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-384928-m02' is duplicated with machine name 'multinode-384928-m02' in profile 'multinode-384928'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-384928-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-384928-m03 --driver=docker  --container-runtime=crio: (20.107557614s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-384928
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-384928: exit status 80 (277.033382ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-384928 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-384928-m03 already exists in multinode-384928-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-384928-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-384928-m03: (2.306155878s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.82s)
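
Note: two guards are exercised here: a new profile name may not collide with a machine name inside an existing profile (exit 14, MK_USAGE), and `node add` refuses to create a node whose generated name already belongs to another profile (exit 80, GUEST_NODE_ADD). A minimal sketch of the first guard, assuming multinode-384928 is running with a worker machine named multinode-384928-m02:

	# rejected: profile name duplicates a machine name in profile multinode-384928
	minikube start -p multinode-384928-m02 --driver=docker --container-runtime=crio
	echo $?   # 14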

                                                
                                    
TestPreload (81.64s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-151159 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-151159 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (44.766025646s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-151159 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-151159
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-151159: (5.804222885s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-151159 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-151159 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (27.689595904s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-151159 image list
helpers_test.go:175: Cleaning up "test-preload-151159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-151159
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-151159: (2.327711878s)
--- PASS: TestPreload (81.64s)
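
Note: this exercises the non-preloaded path: with --preload=false the cluster is built without the preloaded-images tarball, an extra image is pulled into the runtime, and the post-restart `image list` must still contain it. A minimal sketch, with a hypothetical profile name preload-demo:

	minikube start -p preload-demo --memory=3072 --wait=true --preload=false \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --memory=3072 --wait=true --driver=docker --container-runtime=crio
	minikube -p preload-demo image list    # busybox should still be present
	minikube delete -p preload-demo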

                                                
                                    
TestScheduledStopUnix (98.95s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-265028 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-265028 --memory=3072 --driver=docker  --container-runtime=crio: (22.656260533s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-265028 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1121 14:32:39.750237  179516 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:32:39.750586  179516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:32:39.750595  179516 out.go:374] Setting ErrFile to fd 2...
	I1121 14:32:39.750599  179516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:32:39.750800  179516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:32:39.751015  179516 out.go:368] Setting JSON to false
	I1121 14:32:39.751106  179516 mustload.go:66] Loading cluster: scheduled-stop-265028
	I1121 14:32:39.751402  179516 config.go:182] Loaded profile config "scheduled-stop-265028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:32:39.751466  179516 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/config.json ...
	I1121 14:32:39.751668  179516 mustload.go:66] Loading cluster: scheduled-stop-265028
	I1121 14:32:39.751775  179516 config.go:182] Loaded profile config "scheduled-stop-265028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-265028 -n scheduled-stop-265028
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-265028 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1121 14:32:40.112642  179665 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:32:40.112892  179665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:32:40.112900  179665 out.go:374] Setting ErrFile to fd 2...
	I1121 14:32:40.112905  179665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:32:40.113071  179665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:32:40.113269  179665 out.go:368] Setting JSON to false
	I1121 14:32:40.113431  179665 daemonize_unix.go:73] killing process 179553 as it is an old scheduled stop
	I1121 14:32:40.113538  179665 mustload.go:66] Loading cluster: scheduled-stop-265028
	I1121 14:32:40.113870  179665 config.go:182] Loaded profile config "scheduled-stop-265028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:32:40.113941  179665 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/config.json ...
	I1121 14:32:40.114112  179665 mustload.go:66] Loading cluster: scheduled-stop-265028
	I1121 14:32:40.114204  179665 config.go:182] Loaded profile config "scheduled-stop-265028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1121 14:32:40.118132   14542 retry.go:31] will retry after 146.994µs: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
I1121 14:32:40.119242   14542 retry.go:31] will retry after 176.441µs: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
I1121 14:32:40.120350   14542 retry.go:31] will retry after 309.93µs: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
I1121 14:32:40.121469   14542 retry.go:31] will retry after 175.118µs: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
I1121 14:32:40.122615   14542 retry.go:31] will retry after 343.114µs: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
I1121 14:32:40.123730   14542 retry.go:31] will retry after 455.064µs: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
I1121 14:32:40.124861   14542 retry.go:31] will retry after 720.993µs: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
I1121 14:32:40.125989   14542 retry.go:31] will retry after 1.770044ms: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
I1121 14:32:40.128189   14542 retry.go:31] will retry after 1.341172ms: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
I1121 14:32:40.130381   14542 retry.go:31] will retry after 3.137503ms: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
I1121 14:32:40.134593   14542 retry.go:31] will retry after 3.5949ms: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
I1121 14:32:40.138800   14542 retry.go:31] will retry after 9.600438ms: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
I1121 14:32:40.148958   14542 retry.go:31] will retry after 17.748488ms: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
I1121 14:32:40.167155   14542 retry.go:31] will retry after 13.357329ms: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
I1121 14:32:40.181356   14542 retry.go:31] will retry after 29.774894ms: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
I1121 14:32:40.211845   14542 retry.go:31] will retry after 22.394972ms: open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-265028 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1121 14:32:52.778020   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-265028 -n scheduled-stop-265028
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-265028
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-265028 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1121 14:33:05.952678  180229 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:33:05.952910  180229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:33:05.952919  180229 out.go:374] Setting ErrFile to fd 2...
	I1121 14:33:05.952923  180229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:33:05.953101  180229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:33:05.953307  180229 out.go:368] Setting JSON to false
	I1121 14:33:05.953379  180229 mustload.go:66] Loading cluster: scheduled-stop-265028
	I1121 14:33:05.953708  180229 config.go:182] Loaded profile config "scheduled-stop-265028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:33:05.953772  180229 profile.go:143] Saving config to /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/scheduled-stop-265028/config.json ...
	I1121 14:33:05.953951  180229 mustload.go:66] Loading cluster: scheduled-stop-265028
	I1121 14:33:05.954039  180229 config.go:182] Loaded profile config "scheduled-stop-265028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-265028
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-265028: exit status 7 (73.559431ms)

                                                
                                                
-- stdout --
	scheduled-stop-265028
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-265028 -n scheduled-stop-265028
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-265028 -n scheduled-stop-265028: exit status 7 (72.939926ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-265028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-265028
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-265028: (4.866715569s)
--- PASS: TestScheduledStopUnix (98.95s)
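
Note: the scheduled-stop flow above maps onto three operations: schedule a stop (minikube daemonizes a timer process), replace a pending schedule (the old process is killed, hence "process already finished"), and cancel. A minimal sketch, hypothetical profile name demo:

	minikube stop -p demo --schedule 5m                  # daemonize a stop 5 minutes out
	minikube stop -p demo --schedule 15s                 # replaces the pending schedule
	minikube stop -p demo --cancel-scheduled             # "All existing scheduled stops cancelled"
	minikube status -p demo --format '{{.TimeToStop}}'   # inspect a pending schedule
	# once an uncancelled schedule fires, status exits 7 and reports Stopped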

                                                
                                    
TestInsufficientStorage (12.44s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-585951 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-585951 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.032798362s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"99b149a1-66d8-4003-9806-a938ec69ab41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-585951] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5556702-6135-4b66-b422-2c1f2db4279a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21847"}}
	{"specversion":"1.0","id":"17d0a238-8687-4307-bdcf-335eb732bf09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fdd81894-f9aa-4e03-8164-f8e1e8dd0300","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig"}}
	{"specversion":"1.0","id":"c1d3f160-15cc-4693-bf47-14571223fd48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube"}}
	{"specversion":"1.0","id":"e1813161-3f2d-423d-9917-0e4019fd3709","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7e2ba07c-5da8-4cc3-ad82-5146f5a6cb50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"675dc443-89a5-458b-a34d-784db3fffdd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"82b83d64-dc56-4cd3-bf92-63e5ed18aff7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e6bbae8a-3e8a-4876-8b96-ef292b04f723","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4d6978a-b1c1-4450-afcf-84527b710d4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e63fd036-140e-40af-9000-018776fedee6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-585951\" primary control-plane node in \"insufficient-storage-585951\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1227ade3-e763-4817-8f83-8c79ae7ba109","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763507788-21924 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c3408114-feff-445b-9ee9-a1bee86e6ee5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1070342b-95c6-4150-8efb-81a6247c2759","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-585951 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-585951 --output=json --layout=cluster: exit status 7 (274.448891ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-585951","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-585951","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1121 14:34:06.279862  182761 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-585951" does not appear in /home/jenkins/minikube-integration/21847-11045/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-585951 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-585951 --output=json --layout=cluster: exit status 7 (272.860881ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-585951","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-585951","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1121 14:34:06.552991  182875 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-585951" does not appear in /home/jenkins/minikube-integration/21847-11045/kubeconfig
	E1121 14:34:06.562861  182875 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/insufficient-storage-585951/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-585951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-585951
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-585951: (1.857221558s)
--- PASS: TestInsufficientStorage (12.44s)
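
Note: the exit-26 path is driven by the test-only knobs visible in the JSON events above (MINIKUBE_TEST_STORAGE_CAPACITY, MINIKUBE_TEST_AVAILABLE_STORAGE), which make the start see /var as full; RSRC_DOCKER_STORAGE then aborts unless --force is passed. The follow-up probe still works against the half-created profile:

	minikube status -p insufficient-storage-585951 --output=json --layout=cluster
	# exit 7; each node carries StatusCode 507 "InsufficientStorage",
	# and the kubeconfig component reports Error because the endpoint was never written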

                                                
                                    
TestRunningBinaryUpgrade (109s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.956447936 start -p running-upgrade-058310 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1121 14:34:23.292358   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.956447936 start -p running-upgrade-058310 --memory=3072 --vm-driver=docker  --container-runtime=crio: (1m20.186643312s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-058310 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-058310 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.821386475s)
helpers_test.go:175: Cleaning up "running-upgrade-058310" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-058310
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-058310: (2.402321506s)
--- PASS: TestRunningBinaryUpgrade (109.00s)
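
Note: the upgrade-in-place pattern: a cluster is created with an archived v1.32.0 minikube binary (downloaded to a temp path by the suite), then the freshly built binary restarts the same still-running profile. A minimal sketch, binary path and profile name hypothetical:

	/tmp/minikube-v1.32.0 start -p running-upgrade-demo --memory=3072 \
	  --vm-driver=docker --container-runtime=crio           # the old binary takes --vm-driver
	out/minikube-linux-amd64 start -p running-upgrade-demo --memory=3072 \
	  --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 delete -p running-upgrade-demo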

                                                
                                    
TestKubernetesUpgrade (300.66s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.13117335s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-214044
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-214044: (2.431424007s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-214044 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-214044 status --format={{.Host}}: exit status 7 (108.384603ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.204258913s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-214044 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (73.535098ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-214044] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-214044
	    minikube start -p kubernetes-upgrade-214044 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2140442 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-214044 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-214044 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.155062442s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-214044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-214044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-214044: (2.498476274s)
--- PASS: TestKubernetesUpgrade (300.66s)
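
Note: three transitions are covered: an in-place Kubernetes upgrade across a stop (v1.28.0 to v1.34.1), a refused downgrade (exit 106, K8S_DOWNGRADE_UNSUPPORTED, with the delete-and-recreate suggestions shown above), and a restart pinned at the upgraded version. A minimal sketch, hypothetical profile name:

	minikube start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 \
	  --driver=docker --container-runtime=crio
	minikube stop -p k8s-upgrade-demo
	minikube start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.34.1 \
	  --driver=docker --container-runtime=crio   # upgrades the stopped cluster
	minikube start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 \
	  --driver=docker --container-runtime=crio   # refused: exit 106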

                                                
                                    
TestMissingContainerUpgrade (126.23s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2694635486 start -p missing-upgrade-928614 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2694635486 start -p missing-upgrade-928614 --memory=3072 --driver=docker  --container-runtime=crio: (1m19.840461839s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-928614
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-928614
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-928614 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1121 14:35:46.355051   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-928614 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.578288835s)
helpers_test.go:175: Cleaning up "missing-upgrade-928614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-928614
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-928614: (2.437981443s)
--- PASS: TestMissingContainerUpgrade (126.23s)
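
Note: here the cluster's container is removed out from under minikube with plain docker commands, and a start with the new binary must recreate it from the profile on disk. A minimal sketch, profile name hypothetical:

	docker stop missing-upgrade-demo && docker rm missing-upgrade-demo
	out/minikube-linux-amd64 start -p missing-upgrade-demo --memory=3072 \
	  --alsologtostderr -v=1 --driver=docker --container-runtime=crio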

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-973075 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-973075 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (81.826798ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-973075] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (29.53s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-973075 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-973075 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.213533461s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-973075 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.53s)

TestNoKubernetes/serial/StartWithStopK8s (28.34s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-973075 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-973075 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.347455297s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-973075 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-973075 status -o json: exit status 2 (342.061682ms)

-- stdout --
	{"Name":"NoKubernetes-973075","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-973075
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-973075: (4.652552626s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.34s)
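Note that `status -o json` exits 2 here while still printing a full JSON document: the host is Running but Kubelet and APIServer are Stopped. A hedged sketch of how a script would consume that, capturing stdout before testing the exit code:

    # capture the report first; a non-zero exit alone does not mean the command failed
    status_json=$(out/minikube-linux-amd64 -p NoKubernetes-973075 status -o json)
    rc=$?
    [ "$rc" -eq 0 ] || echo "cluster degraded (exit $rc): $status_json"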

                                                
                                    
TestNoKubernetes/serial/Start (10.16s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-973075 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-973075 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.155982622s)
--- PASS: TestNoKubernetes/serial/Start (10.16s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21847-11045/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-973075 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-973075 "sudo systemctl is-active --quiet service kubelet": exit status 1 (305.36422ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
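Unpacking the nested exit codes: inside the node, systemctl is-active exits 3 for an inactive unit (surfaced above as "ssh: Process exited with status 3"), and minikube ssh then returns exit status 1 to the caller, which is exactly what the test wants when kubelet is supposed to be absent. The same probe by hand, as a sketch:

    # exits 0 only if the unit is active; --quiet suppresses the state name on stdout
    out/minikube-linux-amd64 ssh -p NoKubernetes-973075 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not active"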

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.69s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.69s)

TestNoKubernetes/serial/Stop (1.29s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-973075
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-973075: (1.292825542s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (6.83s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-973075 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-973075 --driver=docker  --container-runtime=crio: (6.825900724s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.83s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-973075 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-973075 "sudo systemctl is-active --quiet service kubelet": exit status 1 (321.182682ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestStoppedBinaryUpgrade/Setup (0.68s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.68s)

TestStoppedBinaryUpgrade/Upgrade (40.63s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.4022078614 start -p stopped-upgrade-098657 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.4022078614 start -p stopped-upgrade-098657 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.467613071s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.4022078614 -p stopped-upgrade-098657 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.4022078614 -p stopped-upgrade-098657 stop: (4.300165952s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-098657 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1121 14:35:55.852104   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-098657 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.86060747s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (40.63s)
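Condensed, the upgrade flow this test exercises is: provision with a previously released binary, stop the cluster, then restart the same profile with the freshly built binary. A sketch, assuming the legacy binary was fetched to /tmp by the Setup step above:

    old=/tmp/minikube-v1.32.0.4022078614   # released v1.32.0 binary from Setup
    $old start -p stopped-upgrade-098657 --memory=3072 --vm-driver=docker --container-runtime=crio
    $old -p stopped-upgrade-098657 stop
    # the new binary must adopt and restart the stopped profile without wiping it
    out/minikube-linux-amd64 start -p stopped-upgrade-098657 --memory=3072 --driver=docker --container-runtime=crio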

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-098657
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-098657: (1.010316453s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

TestNetworkPlugins/group/false (5.11s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-989875 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-989875 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (177.715019ms)

-- stdout --
	* [false-989875] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21847
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I1121 14:36:17.454340  218330 out.go:360] Setting OutFile to fd 1 ...
	I1121 14:36:17.454648  218330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:36:17.454659  218330 out.go:374] Setting ErrFile to fd 2...
	I1121 14:36:17.454664  218330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 14:36:17.454993  218330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21847-11045/.minikube/bin
	I1121 14:36:17.455554  218330 out.go:368] Setting JSON to false
	I1121 14:36:17.456949  218330 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4726,"bootTime":1763731051,"procs":290,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 14:36:17.457053  218330 start.go:143] virtualization: kvm guest
	I1121 14:36:17.459540  218330 out.go:179] * [false-989875] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 14:36:17.461120  218330 out.go:179]   - MINIKUBE_LOCATION=21847
	I1121 14:36:17.461143  218330 notify.go:221] Checking for updates...
	I1121 14:36:17.463366  218330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 14:36:17.464436  218330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21847-11045/kubeconfig
	I1121 14:36:17.465443  218330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21847-11045/.minikube
	I1121 14:36:17.466506  218330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 14:36:17.467546  218330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 14:36:17.472726  218330 config.go:182] Loaded profile config "force-systemd-env-653926": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:36:17.472865  218330 config.go:182] Loaded profile config "force-systemd-flag-085432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:36:17.472991  218330 config.go:182] Loaded profile config "kubernetes-upgrade-214044": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 14:36:17.473128  218330 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 14:36:17.496999  218330 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 14:36:17.497147  218330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 14:36:17.558754  218330 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-21 14:36:17.547937083 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 14:36:17.558893  218330 docker.go:319] overlay module found
	I1121 14:36:17.560660  218330 out.go:179] * Using the docker driver based on user configuration
	I1121 14:36:17.561742  218330 start.go:309] selected driver: docker
	I1121 14:36:17.561757  218330 start.go:930] validating driver "docker" against <nil>
	I1121 14:36:17.561778  218330 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 14:36:17.563284  218330 out.go:203] 
	W1121 14:36:17.564412  218330 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1121 14:36:17.565411  218330 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-989875 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-989875

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-989875

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-989875

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-989875

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-989875

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-989875

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-989875

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-989875

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-989875

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-989875

>>> host: /etc/nsswitch.conf:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: /etc/hosts:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: /etc/resolv.conf:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-989875

>>> host: crictl pods:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: crictl containers:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> k8s: describe netcat deployment:
error: context "false-989875" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-989875" does not exist

>>> k8s: netcat logs:
error: context "false-989875" does not exist

>>> k8s: describe coredns deployment:
error: context "false-989875" does not exist

>>> k8s: describe coredns pods:
error: context "false-989875" does not exist

>>> k8s: coredns logs:
error: context "false-989875" does not exist

>>> k8s: describe api server pod(s):
error: context "false-989875" does not exist

>>> k8s: api server logs:
error: context "false-989875" does not exist

>>> host: /etc/cni:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: ip a s:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: ip r s:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: iptables-save:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: iptables table nat:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> k8s: describe kube-proxy daemon set:
error: context "false-989875" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-989875" does not exist

>>> k8s: kube-proxy logs:
error: context "false-989875" does not exist

>>> host: kubelet daemon status:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: kubelet daemon config:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> k8s: kubelet logs:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:35:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-214044
contexts:
- context:
    cluster: kubernetes-upgrade-214044
    user: kubernetes-upgrade-214044
  name: kubernetes-upgrade-214044
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-214044
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/client.crt
    client-key: /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-989875

>>> host: docker daemon status:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: docker daemon config:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: /etc/docker/daemon.json:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: docker system info:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: cri-docker daemon status:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: cri-docker daemon config:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: cri-dockerd version:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: containerd daemon status:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: containerd daemon config:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: /etc/containerd/config.toml:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: containerd config dump:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: crio daemon status:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: crio daemon config:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: /etc/crio:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"

>>> host: crio config:
* Profile "false-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989875"
----------------------- debugLogs end: false-989875 [took: 4.773139399s] --------------------------------
helpers_test.go:175: Cleaning up "false-989875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-989875
--- PASS: TestNetworkPlugins/group/false (5.11s)
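The failure asserted in this group is pure flag validation: with --container-runtime=crio, minikube rejects --cni=false before creating anything, because CRI-O ships no built-in pod networking (hence "The "crio" container runtime requires CNI" above). A sketch of variants that clear this check, since --cni accepts auto, bridge, calico, cilium, flannel, kindnet, or a path to a CNI manifest:

    # let minikube choose a CNI for crio automatically
    out/minikube-linux-amd64 start -p false-989875 --driver=docker --container-runtime=crio --cni=auto
    # or pin the simple bridge plugin explicitly
    out/minikube-linux-amd64 start -p false-989875 --driver=docker --container-runtime=crio --cni=bridge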

                                                
                                    
TestPause/serial/Start (43.55s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-738756 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-738756 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (43.547201071s)
--- PASS: TestPause/serial/Start (43.55s)

TestStartStop/group/old-k8s-version/serial/FirstStart (51.04s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.036560259s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.04s)

TestPause/serial/SecondStartNoReconfiguration (5.98s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-738756 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-738756 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.970512684s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.98s)

TestStartStop/group/no-preload/serial/FirstStart (48.44s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.442218764s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (48.44s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-794941 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d07a0f79-8b73-4999-a3a1-654a71184bf3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d07a0f79-8b73-4999-a3a1-654a71184bf3] Running
E1121 14:37:52.778203   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/addons-243127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003982506s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-794941 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.26s)
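testdata/busybox.yaml itself is never inlined in this log; a hypothetical reconstruction consistent with what the test waits for (a pod named busybox in the default namespace carrying the integration-test=busybox label, using the busybox image that later appears in the image list) would be roughly:

kubectl --context old-k8s-version-794941 create -f - <<'EOF'
# hypothetical manifest; the real testdata/busybox.yaml is not shown in this log
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF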

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.4s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-794941 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-794941 --alsologtostderr -v=3: (16.397926248s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-794941 -n old-k8s-version-794941
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-794941 -n old-k8s-version-794941: exit status 7 (84.459061ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-794941 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
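Two details here generalize: `status --format` takes a Go template over the same struct the earlier JSON output exposed (Name, Host, Kubelet, APIServer, Kubeconfig), and the harness explicitly treats exit status 7 as "may be ok", since a cleanly stopped host is the expected state at this point. A sketch:

    # print two fields; a stopped profile still exits non-zero (7 in this run)
    out/minikube-linux-amd64 status -p old-k8s-version-794941 --format='{{.Host}} {{.Kubelet}}' || echo "status exit code: $?"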

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (27.66s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-794941 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (27.34797657s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-794941 -n old-k8s-version-794941
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (27.66s)

TestStartStop/group/no-preload/serial/DeployApp (8.22s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-589411 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [00913493-ebe3-475f-bad9-5f049f9a6389] Pending
helpers_test.go:352: "busybox" [00913493-ebe3-475f-bad9-5f049f9a6389] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [00913493-ebe3-475f-bad9-5f049f9a6389] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004483541s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-589411 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.22s)

TestStartStop/group/no-preload/serial/Stop (16.39s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-589411 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-589411 --alsologtostderr -v=3: (16.38567421s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.39s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lv25l" [876cea80-de57-4e49-bcb2-c83a9dddd295] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002925517s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lv25l" [876cea80-de57-4e49-bcb2-c83a9dddd295] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003767698s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-794941 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-794941 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589411 -n no-preload-589411
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589411 -n no-preload-589411: exit status 7 (74.370804ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-589411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (47.46s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-589411 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (47.145591885s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589411 -n no-preload-589411
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.46s)

TestStartStop/group/embed-certs/serial/FirstStart (42.32s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-441390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1121 14:39:23.292030   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/functional-179014/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-441390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.318343777s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.32s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hc2j2" [00c7cb49-8fbf-4ec1-9de5-57b0f563f326] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003488449s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/DeployApp (8.22s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-441390 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e3e88ed6-52f6-4e30-97ba-30031a549261] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e3e88ed6-52f6-4e30-97ba-30031a549261] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00364044s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-441390 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.22s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hc2j2" [00c7cb49-8fbf-4ec1-9de5-57b0f563f326] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004423286s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-589411 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-589411 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
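The image check is driven by `image list --format=json`, which the test scans for anything outside the stock minikube image set (here the kindnet and busybox test images). A hedged one-liner for the same inspection, assuming jq is available on the host and that the records carry a CRI-style repoTags field:

    out/minikube-linux-amd64 -p no-preload-589411 image list --format=json | jq -r '.[].repoTags[]'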

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (17.07s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-441390 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-441390 --alsologtostderr -v=3: (17.074617514s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (17.07s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.86s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-859276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-859276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m12.861659289s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.86s)
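
Note: --apiserver-port=8444 moves the API server off the default 8443. Assuming the start above wrote the usual kubeconfig context named after the profile, the port can be confirmed with:

  kubectl --context default-k8s-diff-port-859276 cluster-info
  # or read the server URL straight from the kubeconfig
  kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-859276")].cluster.server}'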

TestStartStop/group/newest-cni/serial/FirstStart (28.93s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-696683 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-696683 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (28.931147613s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.93s)
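
Note: --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 is passed through to kubeadm. A quick way to confirm the CIDR landed on the node (the jsonpath query is a sketch; kubeadm typically carves a /24 per node out of the cluster CIDR):

  kubectl --context newest-cni-696683 get nodes \
      -o jsonpath='{.items[0].spec.podCIDR}'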

TestNetworkPlugins/group/auto/Start (46.69s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (46.690855805s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.69s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-441390 -n embed-certs-441390
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-441390 -n embed-certs-441390: exit status 7 (95.332081ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-441390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
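
Note: the harness tolerates the non-zero status here because a stopped profile is the expected state ("may be ok"). A sketch of the same check in shell, capturing the exit code:

  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-441390
  rc=$?   # 7 observed after a stop; the test only requires "Stopped" plus a non-zero exit
  [ "$rc" -ne 0 ] && out/minikube-linux-amd64 addons enable dashboard \
      -p embed-certs-441390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4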

TestStartStop/group/embed-certs/serial/SecondStart (27.95s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-441390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-441390 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (27.579531207s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-441390 -n embed-certs-441390
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (27.95s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (2.99s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-696683 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-696683 --alsologtostderr -v=3: (2.991314338s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.99s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-696683 -n newest-cni-696683
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-696683 -n newest-cni-696683: exit status 7 (97.287886ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-696683 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (10.58s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-696683 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-696683 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.272191958s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-696683 -n newest-cni-696683
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.58s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hp5ll" [fa58c7e1-02da-426d-a23c-4e127db4c9ae] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003340916s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hp5ll" [fa58c7e1-02da-426d-a23c-4e127db4c9ae] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003390738s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-441390 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-696683 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-441390 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-989875 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
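
Note: KubeletFlags simply greps the running kubelet's command line over SSH; the same inspection by hand prints the kubelet PID and its full argument list (e.g. which CRI socket it was pointed at):

  out/minikube-linux-amd64 ssh -p auto-989875 "pgrep -a kubelet"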

TestNetworkPlugins/group/auto/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-989875 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7qbnd" [ca235bc6-bef7-4a05-b83a-f5420fa37ae7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7qbnd" [ca235bc6-bef7-4a05-b83a-f5420fa37ae7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.003944113s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.22s)

TestNetworkPlugins/group/kindnet/Start (42.67s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.667199163s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.67s)
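
Note: --cni selects the network plugin by name, and the later ControllerPod test confirms the plugin's DaemonSet pod is healthy. A condensed sketch of the pair, with kubectl wait standing in for the harness polling:

  out/minikube-linux-amd64 start -p kindnet-989875 --memory=3072 --cni=kindnet \
      --driver=docker --container-runtime=crio
  kubectl --context kindnet-989875 -n kube-system wait pod -l app=kindnet \
      --for=condition=Ready --timeout=10m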

TestNetworkPlugins/group/calico/Start (52.12s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (52.116079442s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.12s)

TestNetworkPlugins/group/auto/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-989875 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

TestNetworkPlugins/group/auto/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-989875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

TestNetworkPlugins/group/auto/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-989875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
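
Note: Localhost and HairPin differ only in the dial target. The first checks the pod can reach its own port over loopback; the second dials back through the netcat service from a pod backing that same service, which exercises hairpin NAT:

  kubectl --context auto-989875 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # loopback
  kubectl --context auto-989875 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # via the service (hairpin)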

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-859276 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [efb20c28-6dae-485c-8d5b-dad4254c5f4a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [efb20c28-6dae-485c-8d5b-dad4254c5f4a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004428599s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-859276 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.30s)

TestNetworkPlugins/group/custom-flannel/Start (49.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (49.207169894s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.21s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-859276 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-859276 --alsologtostderr -v=3: (18.295641809s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.30s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-7g2lx" [1a034eac-5ef0-4921-8628-f33ebe55ec88] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003979553s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-859276 -n default-k8s-diff-port-859276
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-859276 -n default-k8s-diff-port-859276: exit status 7 (108.226851ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-859276 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-859276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-859276 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.048126259s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-859276 -n default-k8s-diff-port-859276
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.43s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-989875 "pgrep -a kubelet"
I1121 14:41:47.723164   14542 config.go:182] Loaded profile config "kindnet-989875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-989875 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-59c52" [d13e3649-240a-4272-ae2a-bd1754dc38bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-59c52" [d13e3649-240a-4272-ae2a-bd1754dc38bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003109859s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-zqtqj" [5254af52-ede0-44e6-a66c-1f23fcfb7d33] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003514528s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/DNS (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-989875 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.10s)

TestNetworkPlugins/group/kindnet/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-989875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

TestNetworkPlugins/group/kindnet/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-989875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-989875 "pgrep -a kubelet"
I1121 14:41:58.983157   14542 config.go:182] Loaded profile config "calico-989875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-989875 replace --force -f testdata/netcat-deployment.yaml
I1121 14:41:59.227960   14542 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1121 14:41:59.233008   14542 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kdsmz" [74a16380-afa1-45c3-8bd3-ff0c38335b9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kdsmz" [74a16380-afa1-45c3-8bd3-ff0c38335b9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003898094s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.28s)
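
Note: the two kapi.go lines above show the harness polling the netcat deployment until the observed generation and replica counts converge; kubectl's built-in equivalent of that stabilization wait is:

  kubectl --context calico-989875 rollout status deployment/netcat --timeout=15m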

TestNetworkPlugins/group/calico/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-989875 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.11s)

TestNetworkPlugins/group/calico/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-989875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

TestNetworkPlugins/group/calico/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-989875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-989875 "pgrep -a kubelet"
I1121 14:42:11.389486   14542 config.go:182] Loaded profile config "custom-flannel-989875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-989875 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mrxbp" [3f5d618e-e98a-4892-9fff-bf4d15a68f13] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mrxbp" [3f5d618e-e98a-4892-9fff-bf4d15a68f13] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.013025158s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

TestNetworkPlugins/group/enable-default-cni/Start (67.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m7.453500263s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.45s)

TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-989875 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-989875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-989875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/flannel/Start (50.98s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (50.984273056s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.98s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j7rcv" [3ac90d1e-9e41-4140-9b48-08e97b11f9e7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003629658s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j7rcv" [3ac90d1e-9e41-4140-9b48-08e97b11f9e7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003571312s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-859276 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-859276 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestNetworkPlugins/group/bridge/Start (60.99s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-989875 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m0.986201005s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.99s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-nn8s9" [d8426997-7427-46c5-aea2-a2d2655c51bc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004017549s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-989875 "pgrep -a kubelet"
I1121 14:43:24.268435   14542 config.go:182] Loaded profile config "enable-default-cni-989875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-989875 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wz6qj" [0138ec68-bb96-4300-8d21-1bb0d53b0d51] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wz6qj" [0138ec68-bb96-4300-8d21-1bb0d53b0d51] Running
E1121 14:43:28.562368   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:43:28.568708   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:43:28.580026   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:43:28.601334   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:43:28.642654   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:43:28.723990   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:43:28.885475   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003162168s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.16s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-989875 "pgrep -a kubelet"
I1121 14:43:25.831422   14542 config.go:182] Loaded profile config "flannel-989875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-989875 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xp892" [69fb9d22-e00f-4f36-acdd-5a01a1d66285] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xp892" [69fb9d22-e00f-4f36-acdd-5a01a1d66285] Running
E1121 14:43:29.207128   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:43:29.272610   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/old-k8s-version-794941/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:43:29.848687   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 14:43:31.130792   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003720078s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-989875 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.10s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-989875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.08s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-989875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)

TestNetworkPlugins/group/flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-989875 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

TestNetworkPlugins/group/flannel/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-989875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.08s)

TestNetworkPlugins/group/flannel/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-989875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.08s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-989875 "pgrep -a kubelet"
I1121 14:43:43.263234   14542 config.go:182] Loaded profile config "bridge-989875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-989875 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wdl5x" [2bdd3208-99e6-4601-ba7e-d19c4da88abb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wdl5x" [2bdd3208-99e6-4601-ba7e-d19c4da88abb] Running
E1121 14:43:49.057094   14542 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/no-preload-589411/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003768586s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-989875 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-989875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

TestNetworkPlugins/group/bridge/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-989875 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

Test skip (27/328)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-708207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-708207
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (3.82s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-989875 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-989875

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-989875

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-989875

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-989875

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-989875

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-989875

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-989875

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-989875

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-989875

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-989875

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: /etc/hosts:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: /etc/resolv.conf:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-989875

>>> host: crictl pods:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: crictl containers:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> k8s: describe netcat deployment:
error: context "kubenet-989875" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-989875" does not exist

>>> k8s: netcat logs:
error: context "kubenet-989875" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-989875" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-989875" does not exist

>>> k8s: coredns logs:
error: context "kubenet-989875" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-989875" does not exist

>>> k8s: api server logs:
error: context "kubenet-989875" does not exist

>>> host: /etc/cni:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: ip a s:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: ip r s:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: iptables-save:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: iptables table nat:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-989875" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-989875" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-989875" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: kubelet daemon config:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> k8s: kubelet logs:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:35:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-214044
contexts:
- context:
    cluster: kubernetes-upgrade-214044
    user: kubernetes-upgrade-214044
  name: kubernetes-upgrade-214044
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-214044
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/client.crt
    client-key: /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/client.key
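
The kubeconfig above is the root cause of every error in this dump: current-context is empty and kubernetes-upgrade-214044 is the only context defined, so the kubenet-989875 context the probes target never existed (the cluster was skipped before creation). An illustrative way to confirm this against the same kubeconfig (commands assumed run on the same Jenkins host):

  kubectl config get-contexts -o name          # prints only: kubernetes-upgrade-214044
  kubectl config current-context               # error: current-context is not set
  kubectl --context kubenet-989875 get pods    # error: context "kubenet-989875" does not exist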

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-989875

>>> host: docker daemon status:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: docker daemon config:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: docker system info:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: cri-docker daemon status:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: cri-docker daemon config:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: cri-dockerd version:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: containerd daemon status:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: containerd daemon config:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: containerd config dump:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: crio daemon status:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: crio daemon config:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: /etc/crio:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

>>> host: crio config:
* Profile "kubenet-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989875"

----------------------- debugLogs end: kubenet-989875 [took: 3.640816552s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-989875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-989875
--- SKIP: TestNetworkPlugins/group/kubenet (3.82s)
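The skip itself is expected rather than a failure: kubenet is the legacy, pre-CNI network plugin, while cri-o implements pod networking exclusively through CNI, so the pairing is unsupported by design. As a rough sketch of the distinction (profile name hypothetical; --network-plugin is a deprecated minikube flag, shown only for contrast):

  # supported: crio plus an explicit CNI plugin
  minikube start -p cni-demo --driver=docker --container-runtime=crio --cni=bridge
  # unsupported with crio, which is why this group is skipped
  minikube start -p cni-demo --driver=docker --container-runtime=crio --network-plugin=kubenet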

TestNetworkPlugins/group/cilium (3.98s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-989875 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-989875

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-989875

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-989875

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-989875

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-989875

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-989875

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-989875

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-989875

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-989875

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-989875

>>> host: /etc/nsswitch.conf:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: /etc/hosts:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: /etc/resolv.conf:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-989875

>>> host: crictl pods:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: crictl containers:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> k8s: describe netcat deployment:
error: context "cilium-989875" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-989875" does not exist

>>> k8s: netcat logs:
error: context "cilium-989875" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-989875" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-989875" does not exist

>>> k8s: coredns logs:
error: context "cilium-989875" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-989875" does not exist

>>> k8s: api server logs:
error: context "cilium-989875" does not exist

>>> host: /etc/cni:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: ip a s:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: ip r s:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: iptables-save:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: iptables table nat:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-989875

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-989875

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-989875" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-989875" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-989875

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-989875

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-989875" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-989875" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-989875" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-989875" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-989875" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: kubelet daemon config:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> k8s: kubelet logs:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:36:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: force-systemd-flag-085432
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21847-11045/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:35:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-214044
contexts:
- context:
    cluster: force-systemd-flag-085432
    extensions:
    - extension:
        last-update: Fri, 21 Nov 2025 14:36:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: force-systemd-flag-085432
  name: force-systemd-flag-085432
- context:
    cluster: kubernetes-upgrade-214044
    user: kubernetes-upgrade-214044
  name: kubernetes-upgrade-214044
current-context: force-systemd-flag-085432
kind: Config
users:
- name: force-systemd-flag-085432
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/force-systemd-flag-085432/client.crt
    client-key: /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/force-systemd-flag-085432/client.key
- name: kubernetes-upgrade-214044
  user:
    client-certificate: /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/client.crt
    client-key: /home/jenkins/minikube-integration/21847-11045/.minikube/profiles/kubernetes-upgrade-214044/client.key
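
Unlike the kubenet dump, this kubeconfig does have a current-context (force-systemd-flag-085432, a profile created by a test running in parallel), which confirms the shared kubeconfig is intact; the failures occur only because a cilium-989875 context was never created. An illustrative check against the same kubeconfig:

  kubectl config view -o jsonpath='{.contexts[*].name}'
  # force-systemd-flag-085432 kubernetes-upgrade-214044   (no cilium-989875)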

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-989875

>>> host: docker daemon status:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: docker daemon config:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: docker system info:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: cri-docker daemon status:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: cri-docker daemon config:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: cri-dockerd version:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: containerd daemon status:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: containerd daemon config:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: containerd config dump:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: crio daemon status:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: crio daemon config:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: /etc/crio:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

>>> host: crio config:
* Profile "cilium-989875" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989875"

----------------------- debugLogs end: cilium-989875 [took: 3.823699533s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-989875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-989875
--- SKIP: TestNetworkPlugins/group/cilium (3.98s)